Explicit solutions of some linear-quadratic mean field games
Martino Bardi, Dipartimento di Matematica, Università di Padova, via Trieste, 63; I-35121 Padova, Italy
Networks & Heterogeneous Media, June 2012, 7(2): 243-261. doi: 10.3934/nhm.2012.7.243
Received November 2011; Revised March 2012; Published June 2012
We consider $N$-person differential games involving linear systems affected by white noise, running cost quadratic in the control and in the displacement of the state from a reference position, and with long-time-average integral cost functional. We solve an associated system of Hamilton-Jacobi-Bellman and Kolmogorov-Fokker-Planck equations and find explicit Nash equilibria in the form of linear feedbacks. Next we compute the limit as the number $N$ of players goes to infinity, assuming they are almost identical and with suitable scalings of the parameters. This provides a quadratic-Gaussian solution to a system of two differential equations of the kind introduced by Lasry and Lions in the theory of Mean Field Games [22]. Under a natural normalization the uniqueness of this solution depends on the sign of a single parameter. We also discuss some singular limits, such as vanishing noise, cheap control, vanishing discount. Finally, we compare the L-Q model with other Mean Field models of population distribution.
Keywords: differential games, models of population distribution, stochastic control, linear-quadratic problems, mean field games.
Mathematics Subject Classification: Primary: 91A13, 49N70; Secondary: 93E20, 91A06, 49N10.
Citation: Martino Bardi. Explicit solutions of some linear-quadratic mean field games. Networks & Heterogeneous Media, 2012, 7 (2): 243-261. doi: 10.3934/nhm.2012.7.243
Y. Achdou, F. Camilli and I. Capuzzo-Dolcetta, Mean field games: Numerical methods for the planning problem, SIAM J. Control Opt., 50 (2012), 77. doi: 10.1137/100790069.
Y. Achdou and I. Capuzzo-Dolcetta, Mean field games: Numerical methods, SIAM J. Numer. Anal., 48 (2010), 1136. doi: 10.1137/090758477.
O. Alvarez and M. Bardi, Ergodic problems in differential games, in Advances in Dynamic Game Theory, 9 (2007), 131.
O. Alvarez and M. Bardi, Ergodicity, stabilization, and singular perturbations for Bellman-Isaacs equations, Mem. Amer. Math. Soc., 204 (2010).
R. J. Aumann, Markets with a continuum of traders, Econometrica, 32 (1964), 39. doi: 10.2307/1913732.
M. Bardi and I. Capuzzo Dolcetta, "Optimal Control and Viscosity Solutions of Hamilton-Jacobi-Bellman Equations," with appendices by Maurizio Falcone and Pierpaolo Soravia, (1997).
T. Başar and G. J. Olsder, "Dynamic Noncooperative Game Theory," second edition, (1995).
A. Bensoussan and J. Frehse, "Regularity Results for Nonlinear Elliptic Systems and Applications," Applied Mathematical Sciences, 151 (2002).
P. Cardaliaguet, "Notes on Mean Field Games," from P.-L. Lions' lectures at Collège de France, (2010).
J. C. Engwerda, "Linear Quadratic Dynamic Optimization and Differential Games," Wiley, (2005).
W. H. Fleming and H. M. Soner, "Controlled Markov Processes and Viscosity Solutions," 2nd edition, 25 (2006).
D. A. Gomes, J. Mohr and R. R. Souza, Discrete time, finite state space mean field games, J. Math. Pures Appl. (9), 93 (2010), 308.
Guéant, "Mean Field Games and Applications to Economics,", Ph.D. Thesis, (2009). Google Scholar O. Guéant, A reference case for mean field games models,, J. Math. Pures Appl. (9), 92 (2009), 276. Google Scholar O. Guéant, J.-M. Lasry and P.-L. Lions, Mean field games and applications,, in, 2003 (2011), 205. Google Scholar R. Z. Has'minskiĭ, "Stochastic Stability of Differential Equations,", Monographs and Textbooks on Mechanics of Solids and Fluids: Mechanics and Analysis, 7 (1980). Google Scholar M. Huang, P. E. Caines and R. P. Malhamé, Individual and mass behaviour in large population stochastic wireless power control problems: Centralized and Nash equilibrium solutions,, in, (2003), 98. Google Scholar M. Huang, P. E. Caines and R. P. Malhamé, Large population stochastic dynamic games: Closed-loop McKean-Vlasov systems and the Nash certainty equivalence principle,, Commun. Inf. Syst., 6 (2006), 221. Google Scholar M. Huang, P. E. Caines and R. P. Malhamé, Large-population cost-coupled LQG problems with nonuniform agents: Individual-mass behavior and decentralized $\epsilon$-Nash equilibria,, IEEE Trans. Automat. Control, 52 (2007), 1560. doi: 10.1109/TAC.2007.904450. Google Scholar M. Huang, P. E. Caines and R. P. Malhamé, An invariance principle in large population stochastic dynamic games,, J. Syst. Sci. Complex., 20 (2007), 162. doi: 10.1007/s11424-007-9015-4. Google Scholar A. Lachapelle, J. Salomon and G. Turinici, Computation of mean field equilibria in economics,, Math. Models Methods Appl. Sci., 20 (2010), 567. doi: 10.1142/S0218202510004349. Google Scholar J.-M. Lasry and P.-L. Lions, Jeux à champ moyen. I. Le cas stationnaire,, C. R. Acad. Sci. Paris, 343 (2006), 619. doi: 10.1016/j.crma.2006.09.019. Google Scholar J.-M. Lasry and P.-L. Lions, Jeux à champ moyen. II. Horizon fini et contrôle optimal,, C. R. Acad. Sci. Paris, 343 (2006), 679. doi: 10.1016/j.crma.2006.09.018. Google Scholar J.-M. Lasry and P.-L. Lions, Mean field games,, Jpn. J. Math., 2 (2007), 229. Google Scholar Jianhui Huang, Xun Li, Jiongmin Yong. A linear-quadratic optimal control problem for mean-field stochastic differential equations in infinite horizon. Mathematical Control & Related Fields, 2015, 5 (1) : 97-139. doi: 10.3934/mcrf.2015.5.97 Tyrone E. Duncan. Some linear-quadratic stochastic differential games for equations in Hilbert spaces with fractional Brownian motions. Discrete & Continuous Dynamical Systems - A, 2015, 35 (11) : 5435-5445. doi: 10.3934/dcds.2015.35.5435 Kai Du, Jianhui Huang, Zhen Wu. Linear quadratic mean-field-game of backward stochastic differential systems. Mathematical Control & Related Fields, 2018, 8 (3&4) : 653-678. doi: 10.3934/mcrf.2018028 Olivier Guéant. New numerical methods for mean field games with quadratic costs. Networks & Heterogeneous Media, 2012, 7 (2) : 315-336. doi: 10.3934/nhm.2012.7.315 Hanxiao Wang, Jingrui Sun, Jiongmin Yong. Weak closed-loop solvability of stochastic linear-quadratic optimal control problems. Discrete & Continuous Dynamical Systems - A, 2019, 39 (5) : 2785-2805. doi: 10.3934/dcds.2019117 Libin Mou, Jiongmin Yong. Two-person zero-sum linear quadratic stochastic differential games by a Hilbert space method. Journal of Industrial & Management Optimization, 2006, 2 (1) : 95-117. doi: 10.3934/jimo.2006.2.95 Tyrone E. Duncan. Some partially observed multi-agent linear exponential quadratic stochastic differential games. Evolution Equations & Control Theory, 2018, 7 (4) : 587-597. doi: 10.3934/eect.2018028 Galina Kurina, Sahlar Meherrem. 
Vital Surveillances: Epidemiological Characteristics of Human Brucellosis — China, 2016−2019
Zhongfa Tao1,2,3; Qiulan Chen1; Yishan Chen1,4; Yu Li1; Di Mu1; Huimin Yang1; Wenwu Yin1
1. Division of Infectious Disease, Key Laboratory of Surveillance and Early Warning on Infectious Disease, Chinese Center for Disease Control and Prevention, Beijing, China
2. Guizhou Provincial Center for Disease Control and Prevention, Guiyang, Guizhou, China
3. China Field Epidemiology Training Project, Beijing, China
4. Emory University, Atlanta, GA, USA
Corresponding author: Qiulan Chen, [email protected]
FIGURE 1. Monthly distribution of human brucellosis in northern and southern China, 2016–2019.
FIGURE 2. Human brucellosis in the ten provincial-level administrative divisions (PLADs) with the highest incidence rate of cases reported from 2016 to 2019.
FIGURE 3. Age and sex distribution of annual incidence (1/100,000) of human brucellosis by north and south from 2016 to 2019 in China. (A) Age and sex distribution of human brucellosis in northern China, 2016−2019. (B) Age and sex distribution of human brucellosis in southern China, 2016−2019.
TABLE 1. Comparison of human brucellosis between northern and southern China PLADs by numbers of affected counties and incidence (1/100,000), 2016–2019.
Year   Region   Affected counties (number)   Affected counties (%)    Incidence* (per 100,000)
2016   South    591                          40.4 (591/1,463)         0.3
2016   North    1,286                        86.8 (1,286/1,482)       7.8
2016   Total    1,877                        63.7 (1,877/2,945)       3.4
* Incidence was calculated as the total number of cases in each region divided by the total population in the same region.
Introduction: Brucellosis is an important zoonotic infectious disease with its main mode of transmission from livestock to humans. The study analyzed epidemiological characteristics of human brucellosis from 2016 to 2019 in China, aiming to understand progress of the National Program of Brucellosis Prevention and Control.
Methods: The research obtained data on human brucellosis cases reported through China's National Notifiable Disease Reporting System (NNDRS) from January 1, 2016 to December 31, 2019 and described brucellosis epidemiological patterns by region, seasonality, age, sex, and occupation.
Results: The number of cases reported nationwide in China decreased from 47,139 (3.4/100,000) in 2016 to 37,947 (2.7/100,000) in 2018, and then increased to 44,036 (3.2/100,000) in 2019, with an average annual incidence of 3.0/100,000 during the four study years. Brucellosis in Xinjiang declined from 35.6/100,000 in 2016 to 16.3/100,000 in 2019 — an average annual decrease of 22.9%. Brucellosis in Inner Mongolia increased from 23.8/100,000 in 2016 to 54.4/100,000 in 2019 — an average increase of 31.8% per year and accounting for 22% of all reported cases.
Northern China reported 95.2% of cases during this period and still had an incidence of 7.2/100,000, with 87.0% of its counties affected by brucellosis in 2019. In this region in 2019, males aged 45–64 years had an incidence of over 15.9/100,000, compared with over 7.0/100,000 among females of the same age.
Conclusions: Although there was progress in prevention and control of human brucellosis in some provincial-level administrative divisions (PLADs) in 2016 through 2019, progress was limited nationwide and there was an overall resurgence of brucellosis in 2019. The resurgence was primarily in Inner Mongolia. A One Health approach should be strengthened to ensure successful and sustainable brucellosis prevention and control in China.
Brucellosis is a zoonotic disease caused by various Brucella species (1). Humans are infected most often through contact with sick animals, especially goats, sheep, and cattle, and through consumption of contaminated milk and milk products such as fresh cheese (1–2). In China, the main mode of transmission is contact with sick livestock such as sheep, goats, and cattle (3). Clinical features during the acute phase of human brucellosis are fever, hyperhidrosis, fatigue, and joint and muscle pain. If timely and effective treatment is not available during the acute phase, the infection can become chronic, which causes great suffering (4–5). More than 170 countries and regions in the world have reported human and animal brucellosis (6).
In the 1950s, brucellosis was widespread in China, with infection rates of human and animal brucellosis as high as 50% in severely affected areas. Following strengthened prevention and control measures based on the One Health approach, the brucellosis epidemic declined (7). However, at the beginning of the 21st century, human brucellosis had a resurgence in China, with sharply increasing incidence and widely expanded affected areas in the north and the south (8–9). In 2014 and 2015, the number of reported cases exceeded 50,000 per year, and the annual incidence rates were 4.22/100,000 and 4.18/100,000, which were historically high levels (8,10). To prevent and control human brucellosis, China's Ministry of Agriculture and the National Commission of Health and Family Planning co-issued a National Brucellosis Prevention and Control Plan (2016–2020) in 2016 (11). The study analyzed the epidemiological characteristics of human brucellosis from 2016 to 2019 in China to evaluate progress of the national plan.
The study obtained data on human brucellosis cases reported through China's National Notifiable Disease Reporting System (NNDRS) from January 1, 2016 to December 31, 2019. NNDRS is an Internet-based national passive surveillance system that covers all township health centers and all levels of hospitals across the country. The research compared numbers of annual cases and incidences by region, age, gender, and occupation. This study evaluated the time trend of human brucellosis in all provincial-level administrative divisions (PLADs) using the average annual growth rate of annual incidence. The average annual growth rate of annual incidence from 2016 to 2019 was defined as: $\left(\sqrt[3]{\dfrac{\text{annual incidence in 2019}}{\text{annual incidence in 2016}}}-1\right)\times 100\%$. The research compared the number and percent of affected counties, number of reported cases, incidence, and median number of cases at the county level between southern and northern China.
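As a quick arithmetic check of this definition (a worked example using figures reported in the Results, not part of the original article; the function name is illustrative), the average annual growth rates reported for Xinjiang and Inner Mongolia can be reproduced directly:

def avg_annual_growth(incidence_2016, incidence_2019, years=3):
    """Average annual growth rate of annual incidence, in percent."""
    return ((incidence_2019 / incidence_2016) ** (1 / years) - 1) * 100

print(round(avg_annual_growth(35.6, 16.3), 1))   # Xinjiang: about -22.9
print(round(avg_annual_growth(23.8, 54.4), 1))   # Inner Mongolia: about 31.7 (reported as 31.8)

The small discrepancy for Inner Mongolia is presumably due to rounding of the published incidence figures.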
Southern PLADs included Jiangsu, Shanghai, Zhejiang, Anhui, Hunan, Hubei, Sichuan, Chongqing, Guizhou, Yunnan, Guangxi, Guangdong, Hainan, Fujian, and Jiangxi; northern PLADs included Heilongjiang, Jilin, Liaoning, Beijing, Tianjin, Inner Mongolia, Shaanxi, Hebei, Henan, Ningxia, Shanxi, Shandong, Gansu, Qinghai, Xinjiang, and Tibet. All statistical analyses were performed using SAS (version 9.4, SAS Institute Inc., Cary, USA) and figures were drawn using Microsoft Excel (version 2007).
From 2016 to 2019, a total of 167,676 cases of human brucellosis were reported to NNDRS in the mainland of China, for an average annual incidence of 3.02/100,000 population. The annual number of cases reported nationwide was 47,139 (3.43/100,000) in 2016 and decreased to 38,554 (2.79/100,000) in 2017 and 37,947 (2.73/100,000) in 2018. Reported cases increased to 44,036 (3.15/100,000) in 2019. The peak season for human infections (by date of illness onset) was from March to August, accounting for 64.5% of cases in 2016–2019. The north and south had similar seasonal distributions (Figure 1).
Human brucellosis was reported in all 31 PLADs of the mainland of China; 95.2% (159,667) of cases were reported from northern PLADs. Inner Mongolia reported the most cases (36,805 cases; 22.0% of all reports) and had an average annual incidence of 36.5/100,000. The other PLADs among the ten with the highest numbers of cases were also located in northern China: Ningxia, Heilongjiang, Shanxi, Gansu, Liaoning, Jilin, Hebei, Henan, and Shandong (Figure 2), with average annual incidences ranging from 3.1/100,000 to 28.2/100,000. The incidence of human brucellosis was less than 1.0/100,000 in all southern PLADs; in Shanghai, the incidence was less than 0.1/100,000.
High-burden PLADs differed in annual incidence patterns (Figure 2). Inner Mongolia had an upward trend, with an annual incidence increasing from 23.8/100,000 in 2016 to 54.4/100,000 in 2019 — an average annual increase of 31.8%. The annual incidences of Ningxia, Shanxi, Gansu, Shaanxi, Liaoning, and Hebei declined and then increased. In Ningxia, the incidence of brucellosis increased by more than 8/100,000 from 2018 to 2019. In Heilongjiang, Jilin, and Henan, incidences declined in 2017 and remained relatively stable in 2018 and 2019. Xinjiang's annual incidence decreased from 35.6/100,000 in 2016 to 16.3/100,000 in 2019 — an average annual decrease of 22.9% (Figure 2, Supplementary Table S1). Several provinces in southern China had significant annual increases in incidence, including Hainan (39.7%), Fujian (23.8%), Anhui (19.2%), and Hunan (7.5%) (Supplementary Table S1).
The percent of counties affected by brucellosis increased from 63.7% in 2016 to 65.9% in 2017 and decreased to 64.3% in 2019. Most affected counties were in northern PLADs. The percent of affected counties in southern PLADs increased from 40.4% in 2016 to 41.2% in 2019. The annual incidence of human brucellosis varied by county, with median incidences (interquartile range, IQR) of 2.0 (0.5, 7.3)/100,000 in 2016 and 1.7 (0.5, 5.7)/100,000 in 2019. Counties in northern PLADs had higher median incidences than counties in southern PLADs: 4.4 (IQR: 1.6, 13.1)/100,000 vs. 0.4 (0.2, 0.7)/100,000 in 2016, and 3.4 (1.4, 10.9)/100,000 vs. 0.3 (0.2, 0.6)/100,000 in 2019. The 10 counties with the highest average annual incidence had incidences ranging from 124.5/100,000 to 265.0/100,000; among these, 6 were in Inner Mongolia, 3 in Xinjiang, and 1 in Ningxia.
In 2019, the median number of cases reported per county was 7 and was higher in northern China than in southern China (13 vs. 2) (Table 1, Supplementary Table S2). Farming and herding were the most common occupations of reported cases, accounting for 83.8% of reports. Houseworkers and unemployed individuals, students, and migrating individuals and kindergarten children accounted for 4.5%, 1.9%, and 1.1% of cases, respectively. The incidence among males was higher than that among females in all age groups and in both the south and the north, except for those aged 0–4 and 5–9 years (Figure 3 and Supplementary Table S3). People aged 45–64 years had a higher risk of infection than younger people. Males aged 45–64 years had an incidence over 15.9/100,000 in the north in 2019, compared with over 7.0/100,000 among females of the same age (Figure 3 and Supplementary Table S3).
Our study found that human brucellosis had a resurgence in 2019 in China, although the overall epidemic level was nearly 30% lower in 2019 than in the peak year of 2014. Inner Mongolia had a rapid increase that started in 2017, and its surrounding provinces experienced a subsequent resurgence. Northern China reported the vast majority of cases and had a much higher incidence than southern China. Older male farmers and herders were the highest-risk populations.
According to the National Brucellosis Prevention and Control Plan (2016–2020), the targets of brucellosis control by the end of 2020 included: 1) counties in 11 PLADs (Hebei, Shanxi, Inner Mongolia, Liaoning, Jilin, Heilongjiang, Shaanxi, Gansu, Qinghai, Ningxia, and Xinjiang) should have reached and maintained control standards (an indicator is that new human cases decrease compared with the previous year); 2) counties in Hainan Province should have reached the elimination standard (an indicator is that there are no new confirmed human cases for 3 consecutive years); and 3) counties in the other PLADs should have reached the near-elimination standard (an indicator is that there are no new confirmed human cases for 2 consecutive years). Our study showed that in 2019, all PLADs in the mainland of China, including Hainan, still reported human brucellosis cases, and that nearly 90% of northern counties and more than 40% of southern counties were affected. We found that the national program had limited progress and did not achieve these objectives by the end of 2019.
Animal brucellosis control is the most effective strategy for human brucellosis control (1). In addition to health education, early detection, and proper treatment for high-risk populations, the national program required highly endemic areas to implement a mass animal vaccination strategy and less-affected areas to use a "quarantine-vaccination-slaughter" policy (i.e., to vaccinate the animals testing negative and safely slaughter the animals testing positive based on quarantine results). Some areas succeeded, with Xinjiang as a good example. As one of the endemic regions in China, Xinjiang implemented a mass animal vaccination campaign against brucellosis starting in 2016 (12) and has seen a dramatic decrease in the incidence of human brucellosis. A study in Hami Prefecture of Xinjiang showed that the brucellosis infection rate among local cattle and sheep dropped significantly from 2017 to 2019 (13). There are several possible reasons for the increasing incidence in Inner Mongolia and the subsequent rebound in the surrounding provinces.
First, the market price of beef and lamb has been rising since 2016, and the number of people engaged in breeding, purchasing, selling, and trading livestock has increased. Second, the movement of livestock increased with the growing demand and has led to the spread of brucellosis via sick animals due to the lack of proper quarantine and inspection. For example, a genotyping study showed that Brucella strains in Inner Mongolia were related to those in neighboring provinces (14). Third, the investment of resources for controlling brucellosis might have been reduced, and activities based on One Health principles decreased after consecutive declines in previous years — a possible reason based on impressions during field visits to endemic areas.
Northern China had a more severe epidemic, which is consistent with previous studies (8–9). This may be because husbandry of goats, sheep, and cattle is more common in rural areas in northern China. The main pastoral areas and the historically epidemic zone of brucellosis are both located in northern China. Although the southern PLADs had a small proportion of cases, the affected areas in the south expanded and some had rapid increases in incidence. This expansion may be related to trading and movement of livestock from north to south, a phenomenon deserving more attention.
The incidence of brucellosis has clear seasonal characteristics, with peaks in late spring and summer, which is consistent with previous studies (8–9). Farmers and herders remain high-risk occupations. Continuous effort is needed to promote proper use of personal protective equipment to prevent infection. The incidence among males was higher than among females, which was also found in previous research (8–10,15), but the incidence among males under 10 years of age was similar to that among same-age females. This may reflect a lower exposure risk of brucellosis in children under 10 years old, while older men are more likely to have occupational exposures through breeding livestock.
Our study has limitations. The data were obtained from NNDRS and may be affected by the case-finding ability of different areas. Our incidence estimates may underestimate the true incidence of brucellosis, as it is a globally underreported disease due to its non-specific symptoms and signs at the early stage. However, surveillance data can still reflect trends of the epidemic, which is supported by the stability and consistent quality of NNDRS in China. NNDRS has the highest quality and most complete data currently available for evaluation of the epidemiological characteristics and progress towards national program goals.
In conclusion, from 2016 to 2019, although human brucellosis prevention and control showed progress in some PLADs, progress was limited nationwide and brucellosis had a resurgence in 2019. The resurgence was most prominent in Inner Mongolia. A One Health approach should be strengthened to ensure successful and sustainable brucellosis prevention and control in China. Health education campaigns should focus on middle- and older-age groups of farmers and herders in northern China.
Acknowledgements: Dr. Lance Rodewald, senior advisor of China CDC; health staff of CDCs at the provincial, prefecture, and county levels in all PLADs in the mainland of China.
Conflicts of interest: No reported conflicts. All authors have submitted the ICMJE Form for Disclosure of Potential Conflicts of Interest.
Funding:This work was supported by National Science and Technology Major Project of China (2018ZX10101002-003-002). [1] World Health Organization. Brucellosis. 2020. https://www.who.int/news-room/fact-sheets/detail/brucellosis. [2021-1-17]. [2] Franco MP, Mulder M, Gilman RH, Smits HL. Human brucellosis. Lancet Infect Dis 2007;7(12):775 − 86. http://dx.doi.org/10.1016/S1473-3099(07)70286-4. [3] Dean AS, Crump L, Greter H, Hattendorf J, Schelling E, Zinsstag J. Clinical manifestations of human brucellosis: a systematic review and meta-analysis. PLoS Negl Trop Dis 2012;6(12):e1929. http://dx.doi.org/10.1371/journal.pntd.0001929. [4] Zheng RJ, Xie SS, Lu XB, Sun LH, Zhou Y, Zhang YX, et al. A systematic review and meta-analysis of epidemiology and clinical manifestations of human brucellosis in China. BioMed Res Int 2018;2018:5712920. http://dx.doi.org/10.1155/2018/5712920. [5] Dean AS, Crump L, Greter H, Schelling E, Zinsstag J. Global burden of human brucellosis: a systematic review of disease frequency. PLoS Negl Trop Dis 2012;6(10):e1865. http://dx.doi.org/10.1371/journal.pntd.0001865. [6] Pappas G, Papadimitriou P, Akritidis N, Christou L, Tsianos EV. The new global map of human brucellosis. Lancet Infect Dis 2006;6(2):91 − 9. http://dx.doi.org/10.1016/s1473-3099(06)70382-6. [7] Shang DQ, Xiao DL, Yin JM. Epidemiology and control of brucellosis in China. Vet Microbiol 2002;90(1 − 4):165 − 82. http://dx.doi.org/10.1016/s0378-1135(02)00252-3. [8] Lai SJ, Zhou H, Xiong WY, Yu HJ, Huang ZJ, Yu JX, et al. Changing epidemiology of human brucellosis, China, 1955-2014. Emerg Infect Dis 2017;23(2):184 − 94. http://dx.doi.org/10.3201/eid2302.151710. [9] Shi YJ, Lai SJ, Chen QL, Mu D, Li Y, Li XX, et al. Analysis on the epidemiological features of human brucellosis in northern and southern areas of China, 2015–2016. Chin J Epidemiol 2017;38(4):435 − 40. http://dx.doi.org/10.3760/cma.j.issn.0254-6450.2017.04.005. (In Chinese). [10] Chen QL, Lai SJ, Yin WW, Zhou H, Li Y, Mu D, et al. Epidemic characteristics, high-risk townships and space-time clusters of human brucellosis in Shanxi Province of China, 2005–2014. BMC Infect Dis 2016;16(1):760. http://dx.doi.org/10.1186/s12879-016-2086-x. [11] Ministry of Agriculture and Rural Affairs of the People's Republic of China. National brucellosis control program, 2016–2020. 2020. http://www.moa.gov.cn/govpublic/SYJ/201609/t20160909_5270524.htm. [2021-1-11]. (In Chinese). [12] Yang XJ, Yang XJ, Wu YH, Li SG, He B. Evaluation on the purification of Brucellosis in Yizhou district, Hami city of Xinjiang during 2017 to 2019. China Anim Quarant 2020;37(6):9 − 11,15. http://dx.doi.org/10.3969/j.issn.1005-944X.2020.06.003. (In Chinese). [13] Liu WJ, He CW, Song HT, Li Y. Evaluation and analysis about the effectiveness of comprehensive control measures against animal brucellosis in Changji Prefecture. Anim Husband Xinjiang 2019;34(2):30 − 4. http://dx.doi.org/10.16795/j.cnki.xjxmy.2019.2.005. (In Chinese). [14] Xu CZ, Piao DR, Jiang H, Liu HL, Li M, Tian HR, et al. Study on epidemiological characteristics and pathogen typing of brucellosis in Alxa league, Inner Mongolia, China. Chin J Vector Biol Control, 2020;31(6):648 − 51,666. https://kns.cnki.net/kcms/detail/10.1522.R.20201103.1621.028.html. (In Chinese) [15] Chen LP, Zhang M, Li XS, Lu JG. Prevalence status of brucellosis among humans and animals in China. Chin J Anim Quarant 2018;35(10):1 − 5. http://dx.doi.org/10.3969/j.issn.1005-944X.2018.10.001. (In Chinese).
Search Results: 1 - 10 of 178339 matches for " Thomas H. Holmes " Habitat Associations of Juvenile Fish at Ningaloo Reef, Western Australia: The Importance of Coral and Algae Shaun K. Wilson,Martial Depczynski,Rebecca Fisher,Thomas H. Holmes,Rebecca A. O'Leary,Paul Tinkler PLOS ONE , 2012, DOI: 10.1371/journal.pone.0015185 Abstract: Habitat specificity plays a pivotal role in forming community patterns in coral reef fishes, yet considerable uncertainty remains as to the extent of this selectivity, particularly among newly settled recruits. Here we quantified habitat specificity of juvenile coral reef fish at three ecological levels; algal meadows vs. coral reefs, live vs. dead coral and among different coral morphologies. In total, 6979 individuals from 11 families and 56 species were censused along Ningaloo Reef, Western Australia. Juvenile fishes exhibited divergence in habitat use and specialization among species and at all study scales. Despite the close proximity of coral reef and algal meadows (10's of metres) 25 species were unique to coral reef habitats, and seven to algal meadows. Of the seven unique to algal meadows, several species are known to occupy coral reef habitat as adults, suggesting possible ontogenetic shifts in habitat use. Selectivity between live and dead coral was found to be species-specific. In particular, juvenile scarids were found predominantly on the skeletons of dead coral whereas many damsel and butterfly fishes were closely associated with live coral habitat. Among the coral dependent species, coral morphology played a key role in juvenile distribution. Corymbose corals supported a disproportionate number of coral species and individuals relative to their availability, whereas less complex shapes (i.e. massive & encrusting) were rarely used by juvenile fish. Habitat specialisation by juvenile species of ecological and fisheries importance, for a variety of habitat types, argues strongly for the careful conservation and management of multiple habitat types within marine parks, and indicates that the current emphasis on planning conservation using representative habitat areas is warranted. Furthermore, the close association of many juvenile fish with corals susceptible to climate change related disturbances suggests that identifying and protecting reefs resilient to this should be a conservation priority. Speed calculations for random walks in degenerate random environments Mark Holmes,Thomas S. Salisbury Abstract: We calculate explicit speeds for random walks in uniform degenerate random environments. For certain non-uniform random environments, we calculate speeds that are non-monotone. Forward clusters for degenerate random environments Abstract: We consider connectivity properties and asymptotic slopes for certain random directed graphs on $Z^2$ in which the set of points $C_o$ that the origin connects to is always infinite. We obtain conditions under which the complement of $C_o$ has no infinite connected component. Applying these results to one of the most interesting such models leads to an improved lower bound for the critical occupation probability for oriented site percolation on the triangular lattice in 2 dimensions. Random walks in degenerate random environments Abstract: We study the asymptotic behaviour of random walks in i.i.d. random environments on $\Z^d$. The environments need not be elliptic, so some steps may not be available to the random walker. 
We prove a monotonicity result for the velocity (when it exists) for any 2-valued environment, and show that this does not hold for 3-valued environments without additional assumptions. We give a proof of directional transience and the existence of positive speeds under strong, but non-trivial conditions on the distribution of the environment. Our results include generalisations (to the non-elliptic setting) of 0-1 laws for directional transience, and in 2-dimensions the existence of a deterministic limiting velocity. A combinatorial result with applications to self-interacting random walks Abstract: We give a series of combinatorial results that can be obtained from any two collections (both indexed by $\Z\times \N$) of left and right pointing arrows that satisfy some natural relationship. When applied to certain self-interacting random walk couplings, these allow us to reprove some known transience and recurrence results for some simple models. We also obtain new results for one-dimensional multi-excited random walks and for random walks in random environments in all dimensions. Degenerate random environments Abstract: We consider connectivity properties of certain i.i.d. random environments on $\Z^d$, where at each location some steps may not be available. Site percolation and oriented percolation can be viewed as special cases of the models we consider. In such models, one of the quantities most often studied is the (random) set of vertices that can be reached from the origin by following a connected path. More generally, for the models we consider, multiple different types of connectivity are of interest, including: the set of vertices that can be reached from the origin; the set of vertices from which the origin can be reached; the intersection of the two. As with percolation models, many of the models we consider admit, or are expected to admit phase transitions. Among the main results of the paper is a proof of the existence of phase transitions for some two-dimensional models that are non-monotone in their underlying parameter, and an improved bound on the critical value for oriented site percolation on the triangular lattice. The connectivity of the random directed graphs provides a foundation for understanding the asymptotic properties of random walks in these random environments, which we study in a second paper. Alphacoronaviruses in New World Bats: Prevalence, Persistence, Phylogeny, and Potential for Interaction with Humans Christina Osborne,Paul M. Cryan,Thomas J. O'Shea,Lauren M. Oko,Christina Ndaluka,Charles H. Calisher,Andrew D. Berglund,Mead L. Klavetter,Richard A. Bowen,Kathryn V. Holmes,Samuel R. Dominguez Abstract: Bats are reservoirs for many different coronaviruses (CoVs) as well as many other important zoonotic viruses. We sampled feces and/or anal swabs of 1,044 insectivorous bats of 2 families and 17 species from 21 different locations within Colorado from 2007 to 2009. We detected alphacoronavirus RNA in bats of 4 species: big brown bats (Eptesicus fuscus), 10% prevalence; long-legged bats (Myotis volans), 8% prevalence; little brown bats (Myotis lucifugus), 3% prevalence; and western long-eared bats (Myotis evotis), 2% prevalence. Overall, juvenile bats were twice as likely to be positive for CoV RNA as adult bats. At two of the rural sampling sites, CoV RNAs were detected in big brown and long-legged bats during the three sequential summers of this study. 
CoV RNA was detected in big brown bats in all five of the urban maternity roosts sampled throughout each of the periods tested. Individually tagged big brown bats that were positive for CoV RNA and later sampled again all became CoV RNA negative. Nucleotide sequences in the RdRp gene fell into 3 main clusters, all distinct from those of Old World bats. Similar nucleotide sequences were found in amplicons from gene 1b and the spike gene in both a big-brown and a long-legged bat, indicating that a CoV may be capable of infecting bats of different genera. These data suggest that ongoing evolution of CoVs in bats creates the possibility of a continued threat for emergence into hosts of other species. Alphacoronavirus RNA was detected at a high prevalence in big brown bats in roosts in close proximity to human habitations (10%) and known to have direct contact with people (19%), suggesting that significant potential opportunities exist for cross-species transmission of these viruses. Further CoV surveillance studies in bats throughout the Americas are warranted. Perceived Difficulty of Friendship Maintenance Online: Geographic Factors [PDF] Kristie Holmes Abstract: Geographic location has an effect on the perceived ease of friendship maintenance online and may reflect physical space. Participants from the Northeastern United States rated maintaining friendships online as more difficult than those from other regions. Those with the highest anxiety level ratings were from the largest and most densely populated areas (metropolitan) and those who were the least anxious about their image (both online and offline) were from rural areas with the least population density. Those residing in metropolitan areas were the most trusting of online information posted by others and the town/small city group were the least trusting of others' online posted information (similar to the urban group), making those from rural areas nearly as trusting of others' information as the metropolitan group, though probably the result of entirely different influences. A Comparison of Statistics for Assessing Model Invariance in Latent Class Analysis [PDF] Holmes Finch Open Journal of Statistics (OJS) , 2015, DOI: 10.4236/ojs.2015.53022 Abstract: Latent class analysis (LCA) is a widely used statistical technique for identifying subgroups in the population based upon multiple indicator variables. It has a number of advantages over other unsupervised grouping procedures such as cluster analysis, including stronger theoretical underpinnings, more clearly defined measures of model fit, and the ability to conduct confirmatory analyses. In addition, it is possible to ascertain whether an LCA solution is equally applicable to multiple known groups, using invariance assessment techniques. This study compared the effectiveness of multiple statistics for detecting group LCA invariance, including a chi-square difference test, a bootstrap likelihood ratio test, and several information indices. Results of the simulation study found that the bootstrap likelihood ratio test was the optimal invariance assessment statistic. In addition to the simulation, LCA group invariance assessment was demonstrated in an application with the Youth Risk Behavior Survey (YRBS). Implications of the simulation results for practice are discussed. Advantages of Joint Modeling of Component HIV Risk Behaviors and Non-Response: Application to Randomized Trials in Cocaine-Dependent and Methamphetamine-Dependent Populations Tyson H. 
Holmes,Shou-Hua Li Frontiers in Psychiatry , 2011, DOI: 10.3389/fpsyt.2011.00041 Abstract: The HIV risk-taking behavior scale (HRBS) is an 11-item instrument designed to assess the risks of HIV infection due to self-reported injection-drug use and sexual behavior. A retrospective analysis was performed on HRBS data collected from approximately 1,000 participants pooled across seven clinical trials of pharmacotherapies for either the treatment of cocaine dependence or methamphetamine dependence. Analysis faced three important challenges. The sample contained a high proportion of missing assessments after randomization. Also, the HRBS scale consists of two distinct behavioral components which may or may not coincide in response patterns. In addition, distributions of responses on the subscales were highly concentrated at just a few values (e.g., 0, 6). To address these challenges, a single probit regression model was fit to three outcomes variables simultaneously – the two subscale totals plus an indicator variable for assessments not obtained (non-response). This joint-outcome regression model was able to identify that those who left assessment early had higher self-reported risk of injection-drug use and lower self-reported risky sexual behavior because the model was able to draw on information on associations among the three outcomes collectively. These findings were not identified in analyses performed on each outcome separately. No evidence for an effect of pharmacotherapies was observed, except to reduce missing assessments. Univariate-outcome modeling is not recommended for the HRBS.
Multiple positive solutions for coupled Schrödinger equations with perturbations
Haoyu Li 1 and Zhi-Qiang Wang 2,3
1. Center for Applied Mathematics, Tianjin University, Tianjin 300072, China
2. College of Mathematics and Informatics, Fujian Normal University, Fuzhou 350117, China
3. Department of Mathematics and Statistics, Utah State University, Logan, Utah 84322, USA
Dedicated to Jiaquan Liu with admiration on the occasion of his 75th birthday
Received September 2020; Revised October 2020; Published December 2020
Fund Project: This research is supported by CNSF (11771324, 11831009, 11811540026)
For coupled Schrödinger equations with nonhomogeneous perturbations we give several results on the existence of multiple positive solutions. In particular, in one case we consider perturbations of the permutation symmetry.
Keywords: Coupled Schrödinger equations, nonhomogeneous perturbations, multiple positive solutions, parabolic flow, index theory.
Mathematics Subject Classification: Primary: 35J47, 35J50; Secondary: 35K45.
Citation: Haoyu Li, Zhi-Qiang Wang. Multiple positive solutions for coupled Schrödinger equations with perturbations. Communications on Pure & Applied Analysis, doi: 10.3934/cpaa.2020294
MetaMeta: integrating metagenome analysis tools to improve taxonomic profiling
Vitor C. Piro1,2, Marcel Matschkowski1 and Bernhard Y. Renard1
Microbiome 2017, 5:101
Many metagenome analysis tools are presently available to classify sequences and profile environmental samples. In particular, taxonomic profiling and binning methods are commonly used for such tasks. Tools available in these two categories make use of several techniques, e.g., read mapping, k-mer alignment, and composition analysis. Variations on the construction of the corresponding reference sequence databases are also common. In addition, different tools provide good results on different datasets and configurations. All this variation creates a complicated scenario for researchers deciding which methods to use. Installation, configuration, and execution can also be difficult, especially when dealing with multiple datasets and tools.
We propose MetaMeta: a pipeline to execute and integrate results from metagenome analysis tools. MetaMeta provides an easy workflow to run multiple tools with multiple samples, producing a single enhanced output profile for each sample. MetaMeta includes database generation, pre-processing, execution, and integration steps, allowing easy execution and parallelization. The integration relies on the co-occurrence of organisms from different methods as the main feature to improve community profiling while accounting for differences in their databases. In a controlled case with simulated and real data, we show that the integrated profiles of MetaMeta outperform the best single profile. Using the same input data, MetaMeta provides more sensitive and reliable results, with the presence of each organism supported by several methods. MetaMeta uses Snakemake and has six pre-configured tools, all available through the BioConda channel for easy installation (conda install -c bioconda metameta). The MetaMeta pipeline is open-source and can be downloaded at: https://gitlab.com/rki_bioinformatics.
A large and increasing number of metagenome analysis tools are presently available, aiming to characterize environmental samples [1–4]. Motivated by the large amounts of data produced by whole metagenome shotgun (WMS) sequencing technologies, profiling of metagenomes has become more accessible, faster, and applicable in real scenarios, and tends to become the standard method for metagenomics analysis [5–7]. Tools which perform sequence classification based on WMS sequencing data come in different flavors. One basic approach is de novo sequence assembly [8–10], which aims to reconstruct complete or near-complete genomes from fragmented short sequences without any reference or prior knowledge. It is the method which provides the best resolution to assess the community composition. However, it is very difficult to produce meaningful assemblies from metagenomics data due to short read lengths, insufficient coverage, similar DNA sequences, and low-abundance strains [11]. More commonly, methods use the WMS reads directly without assembly and are in general reference-based, meaning that they rely on previously obtained genome sequences to perform their analysis. In this category of applications, two standard definitions are employed: taxonomic profiling and binning tools. Profilers aim to analyze WMS sequences as a whole, predicting organisms and their relative abundances based on a given set of reference sequences.
Binning tools aim to classify each sequence in a given sample individually, linking each one of them to the most probable organism of the reference set. Regardless of their conceptual differences, both groups of tools can be used to characterize microbial communities. Yet binning tools produce an individual classification for each sequence, and their output should be converted and normalized to be used as a taxonomic profile. Methods available among these two categories make use of several techniques, e.g., read mapping, k-mer alignment, and composition analysis. Variations on the construction of the reference databases, e.g., complete genome sequences, marker genes, protein sequences, are also common. Many of those techniques were developed to overcome the computational cost of dealing with the high throughput of modern sequencing technologies as well as the large number of reference genome sequences available. The availability of several options for tools, parameters, databases, and techniques creates a complicated scenario for researchers deciding which methods to use. Different tools provide good results in different scenarios, being more or less precise or sensitive in different configurations. It is hard to rely on their output for every study or sample variation. In addition, when more than one method is used, inconsistent results between tools using different reference sets are difficult to integrate. Furthermore, installation, parameterization, and database creation as well as the lack of standard outputs are challenges not easily overcome.

We propose MetaMeta, a new pipeline for the joint execution and integration of metagenomic sequence classification tools. MetaMeta has several strengths: easy installation and set-up, support for multiple tools, samples, and databases, an improved final profile combining multiple results, out-of-the-box parallelization and high performance computing (HPC) integration, automated database download and set-up, custom database creation, an integrated pre-processing step (read trimming, error correction, and sub-sampling), as well as standardized rules for the integration of new tools. MetaMeta achieves more sensitive profiling results than single tools alone by merging their correct identifications and properly filtering out false identifications. MetaMeta was built with Snakemake [12] and is open-source. The pipeline has six pre-configured tools that are automatically installed using Conda through the BioConda channel (https://bioconda.github.io). We encourage the integration of new tools, making them available to the community through a participative Git repository (via pull request). MetaMeta source-code is available at: https://github.com/pirovc/metameta.

MetaMeta executes and integrates metagenomic sequence classification tools. The integration is based on several tools' output profiles and aims to improve organism identification and quantification. An optional pre-processing and sub-sampling step is included. The pipeline is generalized for binning and profiling tools, categories that were previously described in the CAMI (Critical Assessment of Metagenome Interpretation) challenge (http://www.cami-challenge.org). MetaMeta provides a pre-defined set of standardized rules to facilitate the integration of tools, easy parallelization, and execution on high performance computing infrastructure. The pre-configured tools are available on the BioConda channel to facilitate download and installation, avoiding set-up problems and broken dependencies.
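The pipeline structure described above lends itself to a workflow-engine illustration. The following is a minimal, hypothetical Snakemake-style sketch, not taken from the MetaMeta source; the rule names, file paths, and the wrapped profile_tool.sh command are placeholders standing in for any of the pre-configured tools and their own command lines.

```python
# Hypothetical Snakemake sketch: run one classification tool per sample and
# emit a standardized taxonomic profile. Paths and the wrapped command are
# illustrative placeholders, not the actual MetaMeta rules.

SAMPLES = ["sample1", "sample2"]          # WMS read sets to be profiled
TOOLS = ["toolA", "toolB"]                # stand-ins for CLARK, Kraken, etc.

rule all:
    input:
        expand("profiles/{tool}/{sample}.profile.tsv",
               tool=TOOLS, sample=SAMPLES)

rule run_tool:
    input:
        reads="reads/{sample}.fastq.gz",
        db="databases/{tool}"
    output:
        "profiles/{tool}/{sample}.profile.tsv"
    threads: 12
    shell:
        # placeholder command; each real tool has its own CLI plus a small
        # conversion step producing the rank-separated profile format
        "profile_tool.sh --tool {wildcards.tool} --db {input.db} "
        "--reads {input.reads} --threads {threads} > {output}"
```

Because each (tool, sample) pair is an independent rule instance, a workflow engine can schedule them concurrently on a workstation or an HPC queue, which is the behavior the pipeline relies on for parallelization.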
The pipeline accepts one or multiple WMS samples as well as one or more databases, and the output is an integrated taxonomic profile for each sample per database (as well as a separate output from each executed tool). The MetaMeta pipeline can be described in four modules: database generation, pre-processing, tool execution, and integration (Fig. 1).

Fig. 1 The MetaMeta pipeline: one or more WMS read samples and a configuration file are the input. The pipeline consists of four main modules: Database Generation (only on the first run), Pre-processing (optional), Tool Execution, and Integration. The output is a unified taxonomic profile integrating the results from all configured tools for each sample, generated by the MetaMetaMerge module.

Database generation. On the first run, the pipeline downloads and builds the databases for each of the configured tools. Pre-configured databases (Additional file 1: Table S1) are provided, as well as a custom database creation option based on reference sequences. Since each tool has its own database with a specific version of reference sequences, database profiles are generated, collecting which taxonomic groups each tool can identify. Given a list of accession version identifiers for each sequence in the reference set, MetaMeta automatically generates a taxonomic profile for each tool's database.

Pre-processing. An optional pre-processing step is provided to remove errors and improve sequence classification: Trimmomatic [13] for read trimming and BayesHammer [14] for error correction. A sub-sampling step is also included, allowing the subdivision of large read sets among several tools by equally dividing them or by taking smaller random samples with or without replacement, to reduce overall run-time.

Tool execution. In this step, the pre-processed reads are analyzed by the configured tools. Tools can be added to the pipeline if they follow a minimum set of requirements. They should output their results based on the NCBI Taxonomy database [15] (by name or taxonomic id). Profiling tools should output a rank-separated taxonomic profile with relative abundances, while binning tools should provide an output with sequence id, length used in the assignment, and taxon. The BioBoxes [16] data format for binning and profiling (https://github.com/bioboxes/rfc/tree/master/data-format) is directly accepted. Tools which provide non-standard output should be configured with an additional step converting their output so it can be correctly integrated into the pipeline (more details are given in Additional file 1: File Formats).

The integration step will merge identified taxonomic groups and abundances and provide a unified profile for each sample. MetaMeta aims to improve the final results based on the assumption that the more often the same taxon is reported by different tools, the higher its chance of being correct. This task is performed by the MetaMetaMerge module. This module accepts binning and profiling results and relies on previously generated database profiles. Taxonomic classification can change over time and each tool can use a different version/definition of it. For that reason, a recent taxonomy database version is used to solve name and rank conflicts (e.g., changing name specification, species turning into sub-species, etc.).

Abundance estimation - binning tools. Binning tools provide a single classification for each sequence in the dataset instead of relative abundances for taxons.
An abundance estimation step is necessary for a correct interpretation of such data and posterior integration. The lengths of the binned sequences are summed up for each identified taxonomic group and normalized by the length of their respective reference sequences, estimating the abundance for each identified taxon $n$ as: $$ {abundance}_{n} = \sum_{i=1}^{r} \frac{\sum_{j=1}^{t_{i}}b_{j}}{l_{i}} $$ where $r$ is the number of reference sequences belonging to the taxonomic group $n$, $t_{i}$ is the total number of reads classified to the reference $i$, $b_{j}$ is the number of aligned bases of a read $j$, and $l_{i}$ is the length of the reference $i$. The abundance of the parent nodes is based on the cumulative sum of their children nodes' abundances.

Merging approach. The first step of the merging approach is to normalize the estimated abundances to 100% for each taxonomic level. That is necessary because some tools account for the unclassified reads and others do not. MetaMetaMerge only considers classified reads. Once normalized, all profiles are then integrated into a single profile. In this step, MetaMetaMerge saves the number of occurrences of each taxon among all profiles. This occurrence count is used to better select taxons that are more often identified, assuming that they have higher chances of being a correct identification. MetaMetaMerge also calculates an integrated value for the relative abundance estimation, defined as the harmonic mean of all normalized abundances for each taxon, avoiding outliers and capturing a general trend among the estimated abundances. All steps taken in the merging process are performed for each taxonomic level independently, from superkingdom to species by default. Since tools use different databases of reference sequences, it is necessary to account for this bias. Previously generated database profiles provide which taxons are available for each tool. By merging all database profiles, it is possible to anticipate how many times each taxon could be identified among all tools used. The number of occurrences of each taxon in the tools' output and the database presence number are integrated to generate a score $S$ for each taxon, defined as: $$ S_{ij} = \frac{(i+1)^{2}}{j+1} $$ where $i$ is the number of times the taxon was identified and $j$ the number of times it is contained in the databases. This score calculation accounts for the presence/absence of taxonomic groups in different databases. It gives higher scores to the most identified taxons present in more databases. At the same time, lower scores are assigned to taxons present in many databases but not identified many times. The score calculation is purposely biased toward higher scores when $i=j$ (Additional file 1: Figure S1), giving the benefit of the doubt to taxons with few identifications that are available only in a few databases. Commonly, metagenome analysis methods have to deal with a moderate to high number of false positive identifications at lower taxonomic levels. That occurs mainly because metagenomes can contain very low abundant organisms with similar genome sequences. This problem is even amplified in our merged profile by collecting all false positives from different methods, generating a long tail of false positives with lower scores mixed together with true identifications. A filtering step is therefore necessary to avoid wrong assignments. This step is usually performed by an abundance cutoff value.
Setting this value is subject to uncertainty since the real abundances are usually not known and the separation between low-abundance organisms and false identifications is not clear [17]. A simple cutoff would not provide a good separation between true and false results in this scenario. To overcome this problem, MetaMetaMerge classifies each taxon into a set of bins (four by default) based on the calculated score (Eq. 2). Bins are defined by equally dividing the range of scores into the number of bins selected. Now each taxon has a score and a bin assigned to it. Taxons with higher scores are more likely to be true identifications and are grouped together in the same bin. With this strategy it is possible to obtain a general separation between taxons which are prone to be true or false identifications. Within each bin (taxons sorted by relative abundance), a cutoff is applied to remove possible false identifications with low abundance. Here, the cutoff value is a percentile relative to the number of taxons in each bin and it is selected based on predefined functions, which can yield more sensitive or more precise results (Additional file 1: Mode functions). Each bin will have a different cutoff value depending on the chosen function. If precision is chosen, a gradually more stringent cutoff will be used, selecting only the most abundant taxa in each bin. If sensitivity is selected, cutoffs will be set higher, allowing more identifications to be kept. Sensitive results have an increased chance of containing more true positives, but at the same time they will likely have more false identifications due to the less strict cutoffs. Based on this percentile cutoff, MetaMetaMerge keeps only the most abundant taxa in each bin and removes the taxons below it. After this step, the remaining taxons in each bin are re-grouped and sorted by relative abundance to generate the final profile. At the end, MetaMeta provides a final taxonomic profile integrating all tools' results, a detailed profile with co-occurrence and individual abundances, an interactive Krona pie chart [18] to easily compare taxonomic abundances among the tools, as well as single profiles for each executed tool.

Tool selection. MetaMeta was evaluated with a set of six tools: CLARK [19], DUDes [20], GOTTCHA [21], Kraken [22], Kaiju [23], and mOTUs [24]. The choice was partially motivated by recent publications comparing the performance of such tools [3, 4, 25]. CLARK, GOTTCHA, Kraken, and mOTUs achieved very low false positive numbers according to [4]. DUDes is an in-house developed tool which achieves a good trade-off between precision and sensitivity according to [25]. Kaiju uses a translated database, bringing diversity to the current whole-genome-based methods. We also considered the amount of data/run-time performance of each tool, selecting only the ones that can handle large amounts of data, as commonly used today in metagenomics analysis, in an acceptable time (less than 1 day for our largest CAMI dataset, 7.4 Gbp). MetaPhlAn [26], a widely used metagenomics tool, could not be included due to taxonomic incompatibility. Any other sequence classification tool could be configured and used in MetaMeta, as long as it fits our pipeline requirements described in the Methods - Tool execution section. We selected an equal number of tools for each category: DUDes, GOTTCHA, and mOTUs are taxonomic profiling tools, while CLARK, Kraken, and Kaiju are binning tools.
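The merging logic described above (the Eq. 1 abundance estimation for binning tools, the Eq. 2 score, the harmonic-mean abundance, and the bin-based filtering) can be summarized in a short sketch. This is not the MetaMetaMerge implementation; the input layout, the function names, and the single fixed keep-fraction standing in for the mode-dependent percentile cutoffs are simplifying assumptions made for illustration.

```python
# Hypothetical sketch of the merging logic described above (not the actual
# MetaMetaMerge code). Input layout and cutoffs are illustrative assumptions.
from statistics import harmonic_mean
from collections import Counter

def binning_abundance(assignments, ref_len, ref2taxon):
    """Eq. 1: sum aligned bases per reference, normalize by reference length,
    then accumulate per taxon. 'assignments' holds (ref_id, aligned_bases)
    for each classified read."""
    per_ref = Counter()
    for ref_id, aligned_bases in assignments:
        per_ref[ref_id] += aligned_bases
    abundance = Counter()
    for ref_id, bases in per_ref.items():
        abundance[ref2taxon[ref_id]] += bases / ref_len[ref_id]
    return abundance

def merge_profiles(profiles, db_presence, n_bins=4, keep_fraction=0.5):
    """profiles: list of {taxon: relative abundance normalized to 100%}, one per tool.
    db_presence: {taxon: number of tool databases containing it}."""
    taxa = set().union(*profiles)
    merged = {}
    for t in taxa:
        values = [p[t] for p in profiles if t in p]
        i, j = len(values), db_presence.get(t, 1)
        score = (i + 1) ** 2 / (j + 1)               # Eq. 2
        merged[t] = (score, harmonic_mean(values))   # robust abundance estimate
    # split the score range into equal-width bins
    scores = [s for s, _ in merged.values()]
    lo, hi = min(scores), max(scores)
    width = (hi - lo) / n_bins or 1.0
    kept = {}
    for b in range(n_bins):
        in_bin = [(t, ab) for t, (s, ab) in merged.items()
                  if lo + b * width <= s < lo + (b + 1) * width
                  or (b == n_bins - 1 and s == hi)]
        in_bin.sort(key=lambda x: x[1], reverse=True)
        # keep only the top fraction of each bin (a fixed stand-in for the
        # mode-dependent percentile cutoffs used by the real module)
        kept.update(in_bin[:max(1, int(len(in_bin) * keep_fraction))])
    total = sum(kept.values())
    return {t: 100 * ab / total for t, ab in kept.items()}
```

Sorting within score bins before applying the cutoff is what lets low-abundance taxons identified by several tools survive, while one-off identifications with similar abundance are discarded.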
Databases were created following the default guidelines for each tool, considering only bacteria and archaea as targets (Additional file 1: Table S1).

Datasets and evaluation. The pipeline was evaluated with a set of simulated and real samples (Table 1). The simulated data were provided as part of the CAMI Challenge (toy samples) and the real samples were obtained from the Human Microbiome Project (HMP) [27, 28]. MetaMeta was compared to each single result from each tool configured in the pipeline. Although the pipeline can work at the strain level, we evaluate the results down to the species level since most of the tools still do not provide strain-level identifications. We compare the results to the ground truth in a binary way (true and false positives, sensitivity, and precision) and, when abundance profiles are available, in a quantitative way with the $L_1$ norm, which is the sum of absolute differences between predicted and real abundances. Computer specifications and parameters can be found in Additional file 1.

Table 1 Samples used in this study and run-time (based on the computer specifications in Additional file 1), reporting the number of samples, number of species, CPU time/sample, and estimated wall time/sample for the CAMI toy low, CAMI toy medium, CAMI toy high, and HMP stool (1.44 Tbp) sets. CPU time/sample stands for the mean CPU time for each sample without parallelization. Estimated wall time/sample considers a double speed-up by using 12 threads and concurrently running all six tools (when computational resources are available the pipeline can run all tools/samples/databases at the same time). *Expected number of species from isolated genomes from the gastrointestinal tract.

CAMI data. The CAMI challenge provided three toy datasets of different complexity (Table 1) with known composition and abundances. From low to high complexity, they provide an increasing number of organisms and samples. The samples within a complexity group contain the same organisms with variable abundances among samples. The sets contain real and simulated strains from complete and draft bacterial and archaeal genome sequences. The simulated CAMI datasets, especially those of medium and high complexity, provide very challenging and realistic data in terms of complexity and size. In Fig. 2, it is possible to observe the tools' performance in terms of true and false positives for the CAMI high complexity set. All configured tools perform similarly in the true positive identifications but vary among the false positives. Binning tools have a higher number of false positive identifications due to the fact that even single classified reads are considered. The MetaMetaMerge profile surpassed all other methods in true positive identifications while keeping the false positive number low. The same trend occurs in the other complexity sets (Additional file 1: Figures S3–S8). Figure 3 shows the trade-off between precision and sensitivity for all high complexity samples. MetaMetaMerge achieved the best sensitivity, while GOTTCHA achieved the best precision among the compared tools with default parameters. Those results show how the merging module of the MetaMeta pipeline is capable of better selecting and identifying true positives based on the co-occurrence information. MetaMetaMerge also has the flexibility to provide more precise or sensitive results (Fig. 3) just by changing the mode parameter (details are given in Additional file 1: Mode functions). In the very precise mode, the merged profile outperformed all tools in terms of precision, but at the cost of losing sensitivity.
In the very sensitive mode, the merged profile could improve the sensitivity compared to the run with default parameters, with some loss of precision. It is important to notice that the trade-off between precision and sensitivity could also be explored through the cutoff parameter (default 0.0001), depending on what is expected to be the lowest abundant organism in the sample. The MetaMetaMerge mode parameter will give more precise or sensitive results based on this cutoff value.

Fig. 2 True and false positives - CAMI high complexity set. In blue (left y axis): true positives. In red (right y axis): false positives. Results at species level. Each marker represents one out of five samples from the CAMI high complexity set.

Fig. 3 Precision and sensitivity - CAMI high complexity set. The dotted black line marks the maximum possible sensitivity value (0.57) that could be achieved with the given tools and databases. Results at species level. Each marker represents one out of five samples from the CAMI high complexity set.

In terms of relative abundance, MetaMetaMerge provides the most reliable predictions, with the smallest difference from the real abundances, as shown in Fig. 4 with regard to the $L_1$ norm measure. By taking the harmonic mean, we succeed in reducing the effect of outliers that occur among the tools and capture the trend of the estimated relative abundances, providing a new, more robust estimate (Additional file 2).

Fig. 4 $L_1$ norm error. Mean of the $L_1$ norm measure at each taxonomic level for five samples from the high complexity CAMI set.

Pre-processing and sub-sampling effects. We explore here the effects of pre-processing and sub-sampling on the CAMI toy sets. For the results shown in this section, reads were trimmed and sub-sampled to several sizes, with and without replacement, and each sub-sample was executed five times. Trimming effects were small on this set, slightly increasing precision (data not shown). Figure 5 shows the effects of sub-sampling in terms of sensitivity and run-time (wall time for the full pipeline) for one of the high complexity CAMI sets. Sub-sampling provides a large decrease in run-time for every tool and consequently for the whole pipeline. However, only below 5% is it possible to see a significant, but still small, decrease in sensitivity. All tools behave similarly on the sub-sampled sets, with GOTTCHA and mOTUs showing a large decrease in sensitivity when using only 1% of the data. With the same sub-sample configuration (1%), MetaMetaMerge achieved a sensitivity higher than any other tool alone using 100% of the set. It also runs the whole pipeline approximately 17 times faster than with the full set (from 05 h 41 min 36 s to 20 min 19 s on average), being faster than the fastest tool with 100% of the data (Kraken, 29 min 26 s on average) and the second most sensitive tool (Kaiju, 1 h 47 min 44 s on average). As expected, precision is slightly increased in small sub-samples due to less data (Additional file 1: Figure S9).

Fig. 5 Sub-sampling. Sensitivity (left y axis) and run-time (right y axis) at species level for one randomly selected CAMI high complexity sample. Each sub-sample was executed five times. Lines represent the mean and the area around it the maximum and minimum achieved values. Run-time stands for the time to execute the MetaMeta pipeline. The evaluated sample sizes are 100, 50, 25, 16.6, 10, 5, and 1%. 16.6% is the exact division among six tools, using the whole sample. Sub-samples above that value were taken with replacement and below it without replacement.
The plot is limited to a value of 0.57 (left y axis), which is the maximum possible sensitivity value that could be achieved with the given tools and databases.

Human Microbiome Project data. The HMP provided several resources to characterize the microbial communities at different sites of the human body. MetaMeta was tested on stool samples to evaluate the performance of the pipeline on real data. For evaluation we used a list of reference genome sequences that were isolated from specific body sites and sequenced as part of the HMP. They do not represent the complete content of microbial diversity in each community but serve as a guide to check how well the tools are performing. Stool samples were compared against the isolated genomes obtained from the gastrointestinal tract. Figure 6 shows the results for 147 samples. In sensitive mode, MetaMetaMerge achieved the highest number of true positive identifications with a moderate number of false positives, below all binning tools but above all taxonomic profilers. mOTUs produced good results in the selected samples mainly because its database is based on the isolated genomes from the HMP (the same as the ground truth used here). Since mOTUs is the only tool with a distinct set of reference sequences that could classify this set, the scores (from Eq. 2) attributed to mOTUs' unique identifications were low. Still, MetaMetaMerge could improve the true identifications while keeping a lower rate of false positives by incorporating the true identifications from other methods.

Fig. 6 True and false positives - HMP stool samples. In blue (left y axis): true positives. In red (right y axis): false positives. Results at species level. Each marker represents one out of 147 stool samples from the HMP.

MetaMeta is a complete pipeline for classification of metagenomic datasets. It provides improved profiles over the tested tools by merging their results. In addition, the pipeline provides easy configuration, execution, and parallelization. With simulated and real data, MetaMeta is capable of achieving higher sensitivity. That is possible due to the MetaMetaMerge module, which extracts information on the co-occurrence of taxons in databases and profiles, collecting complementary results from different methods. Further, the guided cutoff approach avoids false positives and keeps most of the true identifications, enhancing final sensitivity and exploring the complementarity of currently available methods. By running several tools, MetaMeta has an apparently prohibitive execution time. In reality, the parallelization provided by Snakemake makes the pipeline run in a reasonable time using most of the computational resources (Table 1). That is possible due to the way the rules are chained and executed among several cores, lasting not more than the slowest tool plus pre- and post-processing time, which are very small in comparison to the analysis time. In addition, sub-sampling allows the reduction of input data and a large decrease in execution time with small if any impact on the final result. That is viable due to redundant data contained in many metagenomic samples as well as redundant execution by several tools in the MetaMeta environment. However, sub-sampling should be used with caution, taking into consideration the coverage of low-abundance organisms. All tools presented here are available on the BioConda channel and are automatically installed on the first MetaMeta run, working out-of-the-box in several computer environments and avoiding conflicts and broken dependencies.
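As a concrete illustration of the sub-sampling option discussed above, the sketch below draws a fraction of reads with or without replacement and splits a read set equally among tools. It is a simplified stand-in, under the assumption that reads fit in a Python list; the actual pre-processing step works on FASTQ files.

```python
# Hypothetical sketch of read sub-sampling (with or without replacement)
# and of the equal division of a read set among several tools.
import random

def subsample(reads, fraction, replacement=False, seed=None):
    """Return a random sub-sample containing roughly 'fraction' of the reads."""
    rng = random.Random(seed)
    k = max(1, int(len(reads) * fraction))
    if replacement:
        return [rng.choice(reads) for _ in range(k)]
    return rng.sample(reads, k)

def split_among_tools(reads, n_tools, seed=None):
    """Equally divide a read set among n_tools (each tool sees ~1/n of the data)."""
    rng = random.Random(seed)
    shuffled = reads[:]
    rng.shuffle(shuffled)
    return [shuffled[i::n_tools] for i in range(n_tools)]
```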
MetaMeta can also handle multiple large samples and databases at the same time, with options to delete intermediate files and keep only the necessary ones, being well suited to large-scale projects. It also reduces idle computational time by smartly parallelizing samples among one or more tools (Additional file 1: Figures S10–S13). The parallelization noticeably decreases the run-time when computational power is available and manages to serialize and control the run when access to computational power is limited. Integration into HPC systems is also possible and we provide a pre-configured file for queuing systems (e.g., Slurm). As stated by Lee et al. [29], solid-state drives accelerate the run-time of many bioinformatics tools. Such drives were used in some evaluations shown in this paper and are beneficial for the MetaMeta pipeline. MetaMeta makes it easier for the user to obtain more precise or sensitive results by providing a single default parameter as well as advanced options for more refined results. This parameter, when set towards sensitivity, tends to output an extensive list of taxons, being at the same time less stringent with the minimum abundance cutoff. When set towards precision, it applies a stricter abundance cutoff and provides a smaller but more accurate list of predicted taxons. Since all tools were used in default mode, it is possible to obtain problem-centric, optimized results only by changing the way MetaMeta works. That facilitates and simplifies the task for researchers who are in search of a specific goal. MetaMeta supports strain-level identification. Nevertheless, all evaluations were made at the species level due to the lack of support for strain identification in some tools. The lack of a standard was also a limiting factor. Taxonomic IDs are no longer assigned at the strain level [30] and tools output strains in different ways. With standard output definitions, the use of strain classification in the pipeline is straightforward. Related in part, a method called WEVOTE was developed in parallel and recently published [31], in which five classification tools were used to generate a merged taxonomic profile. Although the two methods present distinct ways of achieving better taxonomic profiling, they are not built for the same use case. WEVOTE relies on BLAST-based tools and is thereby not suited for large-scale WMS applications, since the dataset sizes practically prohibit analyses via BLAST-based approaches. In contrast, MetaMeta was built to handle high-throughput data. Moreover, we supply an easy way to install tools, and MetaMeta provides a complete pipeline which can configure databases and run classification tools with an integration module at the end, whereas WEVOTE provides only the integration method. As a result, a comparison between the pipelines is hard to perform and interpret since each uses a different set of tools and databases. In conclusion, MetaMeta is an easy way to execute and generate improved taxonomic profiles for WMS samples with multiple tool support. We believe the method can be very useful for researchers who are dealing with multiple metagenomic samples and want to standardize their analysis. The MetaMeta pipeline was built in a way that facilitates execution in many computational environments using Snakemake and BioConda. That diminishes the burden of installing and configuring multiple tools.
The pipeline also gives control over the storage of the results and has an easy set of parameters which makes it possible to obtain more precise or sensitive results. MetaMeta was coded in a standardized manner, allowing easy expansion to more tools, also collectively through the MetaMeta git repository (https://github.com/pirovc/metameta). We believe that the final profile could be even further improved with novel tools configured into the pipeline.

Availability and requirements. Project name: MetaMeta. Project home page: https://github.com/pirovc/metameta. Operating systems: Linux. Programming language: Python. Other requirements: Snakemake. Licence: MIT.

Abbreviations. CAMI: Critical Assessment of Metagenome Interpretation; HMP: Human Microbiome Project; HPC: High performance computing; WMS: Whole metagenome shotgun.

We thank Enrico Seiler for proof-reading the manuscript and for technical support and Martin S. Lindner for fruitful discussions. This work was supported by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) - Ciencia sem Fronteiras [BEX 13472/13-5 to VCP] and by the German Federal Ministry of Health [IIA5-2512-FSB-725 to BYR]. VCP and BYR conceived the project and designed the methods. VCP developed the pipeline and MM led the sub-sampling analysis. VCP and BYR interpreted the data. VCP drafted the manuscript with contributions by MM and BYR. All authors read and approved the final manuscript. This manuscript does not report data collected from humans or animals. This manuscript does not contain any individual person's data in any form. Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

Additional file 1: Supplementary figures and information. (PDF 1024 kb) Additional file 2: Interactive charts for all CAMI toy set results in default, very-precise, and very-sensitive mode. File prefixes S, M, and H for low, medium, and high complexity, respectively. (TAR 3573 kb)

The software presented in this manuscript is available at: https://gitlab.com/rki_bioinformatics. Source code is available at: https://github.com/pirovc/metameta and https://github.com/pirovc/metametamerge. CAMI data sets are available at: http://data.cami-challenge.org/. HMP data sets are available at the NCBI Sequence Read Archive: https://www.ncbi.nlm.nih.gov/sra. Accession numbers are listed in Additional file 1.

Author affiliations: Research Group Bioinformatics (NG4), Robert Koch Institute, Nordufer 20, Berlin, 13353, Germany; CAPES Foundation, Ministry of Education of Brazil, Brasília, 70040-020, DF, Brazil.

References
1. Bazinet AL, Cummings MP. A comparative evaluation of sequence classification programs. BMC Bioinforma. 2012; 13(1):92. doi:10.1186/1471-2105-13-92.
2. Pavlopoulos GA, Oulas A, Pavloudi C, Polymenakou P, Papanikolaou N, Kotoulas G, Arvanitidis C, Iliopoulos I. Metagenomics: tools and insights for analyzing next-generation sequencing data derived from biodiversity studies. Bioinforma Biol Insights. 2015;75. doi:10.4137/BBI.S12462.
3. Peabody MA, Van Rossum T, Lo R, Brinkman FSL.
Evaluation of shotgun metagenomics sequence classification methods using in silico and in vitro simulated communities. BMC Bioinforma. 2015; 16(1):362. doi:10.1186/s12859-015-0788-5.
4. Lindgreen S, Adair KL, Gardner PP. An evaluation of the accuracy and speed of metagenome analysis tools. Sci Rep. 2016; 6:19233. doi:10.1038/srep19233.
5. Köser CU, Ellington MJ, Cartwright EJP, Gillespie SH, Brown NM, Farrington M, Holden MTG, Dougan G, Bentley SD, Parkhill J, Peacock SJ. Routine use of microbial whole genome sequencing in diagnostic and public health microbiology. PLoS Pathog. 2012; 8(8):1002824. doi:10.1371/journal.ppat.1002824.
6. Pallen MJ. Diagnostic metagenomics: potential applications to bacterial, viral and parasitic infections. Parasitology. 2014; 141(14):1856–62. doi:10.1017/S0031182014000134.
7. Fricke WF, Rasko DA. Bacterial genome sequencing in the clinic: bioinformatic challenges and solutions. Nat Rev Genet. 2014; 15(1):49–55. doi:10.1038/nrg3624.
8. Namiki T, Hachiya T, Tanaka H, Sakakibara Y. MetaVelvet: an extension of Velvet assembler to de novo metagenome assembly from short sequence reads. Nucleic Acids Res. 2012; 40(20):155–5. doi:10.1093/nar/gks678.
9. Peng Y, Leung HCM, Yiu SM, Chin FYL. IDBA-UD: a de novo assembler for single-cell and metagenomic sequencing data with highly uneven depth. Bioinformatics. 2012; 28(11):1420–8. doi:10.1093/bioinformatics/bts174.
10. Li D, Liu CM, Luo R, Sadakane K, Lam TW. MEGAHIT: an ultra-fast single-node solution for large and complex metagenomics assembly via succinct de Bruijn graph. Bioinformatics. 2015; 31(10):1674–6. doi:10.1093/bioinformatics/btv033.
11. Howe A, Chain PSG. Challenges and opportunities in understanding microbial communities with metagenome assembly (accompanied by IPython Notebook tutorial). Front Microbiol. 2015;6. doi:10.3389/fmicb.2015.00678.
12. Koster J, Rahmann S. Snakemake–a scalable bioinformatics workflow engine. Bioinformatics. 2012; 28(19):2520–2. doi:10.1093/bioinformatics/bts480.
13. Bolger AM, Lohse M, Usadel B. Trimmomatic: a flexible trimmer for Illumina sequence data. Bioinformatics. 2014; 30(15):2114–20. doi:10.1093/bioinformatics/btu170.
14. Nikolenko SI, Korobeynikov AI, Alekseyev MA. BayesHammer: Bayesian clustering for error correction in single-cell sequencing. BMC Genomics. 2013; 14(Suppl 1):7. doi:10.1186/1471-2164-14-S1-S7.
15. Federhen S. The NCBI Taxonomy database. Nucleic Acids Res. 2012; 40(D1):136–43. doi:10.1093/nar/gkr1178.
16. Belmann P, Dröge J, Bremges A, McHardy AC, Sczyrba A, Barton MD. Bioboxes: standardised containers for interchangeable bioinformatics software. GigaScience. 2015; 4(1):47. doi:10.1186/s13742-015-0087-0.
17. Zepeda Mendoza ML, Sicheritz-Pontén T, Gilbert MTP. Environmental genes and genomes: understanding the differences and challenges in the approaches and software for their analyses. Brief Bioinform. 2015; 16(5):745–58. doi:10.1093/bib/bbv001.
18. Ondov BD, Bergman NH, Phillippy AM. Interactive metagenomic visualization in a Web browser. BMC Bioinforma. 2011; 12(1):385. doi:10.1186/1471-2105-12-385.
19. Ounit R, Wanamaker S, Close TJ, Lonardi S.
CLARK: fast and accurate classification of metagenomic and genomic sequences using discriminative k-mers. BMC Genomics. 2015; 16(1):236. doi:10.1186/s12864-015-1419-2.
20. Piro VC, Lindner MS, Renard BY. DUDes: a top-down taxonomic profiler for metagenomics. Bioinformatics. 2016; 32(15):2272–80. doi:10.1093/bioinformatics/btw150.
21. Freitas TAK, Li PE, Scholz MB, Chain PSG. Accurate read-based metagenome characterization using a hierarchical suite of unique signatures. Nucleic Acids Res. 2015; 43(10):69–9. doi:10.1093/nar/gkv180.
22. Wood DE, Salzberg SL. Kraken: ultrafast metagenomic sequence classification using exact alignments. Genome Biol. 2014; 15(3):46. doi:10.1186/gb-2014-15-3-r46.
23. Menzel P, Ng KL, Krogh A. Fast and sensitive taxonomic classification for metagenomics with Kaiju. Nat Commun. 2016; 7:11257. doi:10.1038/ncomms11257.
24. Sunagawa S, Mende DR, Zeller G, Izquierdo-Carrasco F, Berger SA, Kultima JR, Coelho LP, Arumugam M, Tap J, Nielsen HB, Rasmussen S, Brunak S, Pedersen O, Guarner F, de Vos WM, Wang J, Li J, Doré J, Ehrlich SD, Stamatakis A, Bork P. Metagenomic species profiling using universal phylogenetic marker genes. Nat Methods. 2013; 10(12):1196–9. doi:10.1038/nmeth.2693.
25. Sczyrba A, Hofmann P, Belmann P, Koslicki D, Janssen S, Droege J, Gregor I, Majda S, Fiedler J, Dahms E, Bremges A, Fritz A, Garrido-Oter R, Sparholt Jorgensen T, Shapiro N, Blood PD, Gurevich A, Bai Y, Turaev D, DeMaere MZ, Chikhi R, Nagarajan N, Quince C, Hestbjerg Hansen L, Sorensen SJ, Chia BKH, Denis B, Froula JL, Wang Z, Egan R, Kang DD, Cook JJ, Deltel C, Beckstette M, Lemaitre C, Peterlongo P, Rizk G, Lavenier D, Wu YW, Singer SW, Jain C, Strous M, Klingenberg H, Meinicke P, Barton M, Lingner T, Lin HH, Liao YC, Gueiros Z, Silva G, Cuevas DA, Edwards RA, Saha S, Piro VC, Renard BY, Pop M, Klenk HP, Goeker M, Kyrpides N, Woyke T, Vorholt JA, Schulze-Lefert P, Rubin EM, Darling AE, Rattei T, McHardy AC. Critical assessment of metagenome interpretation - a benchmark of computational metagenomics software. bioRxiv. 2017. doi:10.1101/099127.
26. Truong DT, Franzosa EA, Tickle TL, Scholz M, Weingart G, Pasolli E, Tett A, Huttenhower C, Segata N. MetaPhlAn2 for enhanced metagenomic taxonomic profiling. Nat Methods. 2015; 12(10):902–3.
27. Methé BA, et al. A framework for human microbiome research. Nature. 2012; 486(7402):215–21. doi:10.1038/nature11209.
28. Huttenhower C, et al. Structure, function and diversity of the healthy human microbiome. Nature. 2012; 486(7402):207–14. doi:10.1038/nature11234.
29. Lee S, Min H, Yoon S. Will solid-state drives accelerate your bioinformatics? In-depth profiling, performance analysis and beyond. Brief Bioinform. 2016; 17(4):713–27. doi:10.1093/bib/bbv073.
30. Federhen S, Clark K, Barrett T, Parkinson H, Ostell J, Kodama Y, Mashima J, Nakamura Y, Cochrane G, Karsch-Mizrachi I. Toward richer metadata for microbial sequences: replacing strain-level NCBI taxonomy taxids with BioProject, BioSample and Assembly records. Standards Genomic Sci. 2014; 9(3):1275–7. doi:10.4056/sigs.4851102.
31. Metwally AA, Dai Y, Finn PW, Perkins DL. WEVOTE: Weighted voting taxonomic identification method of microbial sequences. PLOS ONE. 2016; 11(9):0163527.
doi:10.1371/journal.pone.0163527.
The Annals of Statistics, Volume 46, Number 6B (2018), 3217-3245.

Approximate $\ell_{0}$-penalized estimation of piecewise-constant signals on graphs. Zhou Fan and Leying Guan.

We study recovery of piecewise-constant signals on graphs by the estimator minimizing an $\ell_{0}$-edge-penalized objective. Although exact minimization of this objective may be computationally intractable, we show that the same statistical risk guarantees are achieved by the $\alpha$-expansion algorithm, which computes an approximate minimizer in polynomial time. We establish that for graphs with small average vertex degree, these guarantees are minimax rate-optimal over classes of edge-sparse signals. For spatially inhomogeneous graphs, we propose minimization of an edge-weighted objective where each edge is weighted by its effective resistance or another measure of its contribution to the graph's connectivity. We establish minimax optimality of the resulting estimators over corresponding edge-weighted sparsity classes. We show theoretically that these risk guarantees are not always achieved by the estimator minimizing the $\ell_{1}$/total-variation relaxation, and empirically that the $\ell_{0}$-based estimates are more accurate in high signal-to-noise settings.

Received: April 2017. Revised: September 2017. First available in Project Euclid: 11 September 2018. doi:10.1214/17-AOS1656. Mathematics Subject Classification: Primary: 62G05 (Estimation). Keywords: approximation algorithm, graph cut, effective resistance, total-variation denoising.

Citation: Fan, Zhou; Guan, Leying. Approximate $\ell_{0}$-penalized estimation of piecewise-constant signals on graphs. Ann. Statist. 46 (2018), no. 6B, 3217-3245. doi:10.1214/17-AOS1656. https://projecteuclid.org/euclid.aos/1536631272
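The abstract does not display the objective itself. One standard form of an $\ell_0$-edge-penalized, edge-weighted estimator for a signal $\theta$ observed with noise as $y$ on a graph $G=(V,E)$ (our reading of the setup, not a quotation from the paper) is:

```latex
\hat{\theta} \;=\; \operatorname*{arg\,min}_{\theta \in \mathbb{R}^{V}}
\; \frac{1}{2} \sum_{v \in V} (y_v - \theta_v)^2
\;+\; \lambda \sum_{(u,v) \in E} w_{uv}\, \mathbf{1}\{\theta_u \neq \theta_v\}
```

with $w_{uv} \equiv 1$ for the unweighted penalty, and with $w_{uv}$ set to the effective resistance of edge $(u,v)$ in the weighted variant; the $\ell_1$/total-variation relaxation replaces the indicator with $|\theta_u - \theta_v|$.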
Supplementary material: the supplementary appendices contain proofs of the theoretical results (doi:10.1214/17-AOS1656SUPP).
Are there ways to automatically (no human testing) measure a $9 \times 9$ Sudoku puzzle's average hardness for a human to solve?

Most resources providing Sudoku puzzles assign a difficulty category to each puzzle; I have even seen some with 15 or more difficulty categories. But what is a good way to assign these difficulty categories? If enough human puzzle solvers were used, the average time for a human to complete a puzzle and the percentage of people who successfully solved the puzzle could be computed for the human sample, and difficulty categories assigned accordingly. But it seems like there should be predictable scenarios that keep appearing as various puzzles are being solved and that affect the average human difficulty; these could be automatically detected as a computer solves the puzzle, and the patterns could then be assembled into a predicted average difficulty for humans. Are there, and what are, good techniques to do this? Maybe machine learning with enough training data of human performance on sample puzzles?

Tags: algorithms, machine-learning, board-games, sudoku

Comment: For a non-statistical approach you'd need some idea of what this "hardness" is, or rather which parameters of the puzzle it is tied to (and how).

Answer (Pseudonym): There have been many such attempts. Most of them try to derive deduction rules which humans seem to use to solve Sudoku puzzles. My money is on this approach: Mária Ercsey-Ravasz & Zoltán Toroczkai (2012), The Chaos within Sudoku, Scientific Reports 2:75, Nature. The idea is based around the notion of transient chaos in a dynamical system. In dynamical systems, when you change the state, there is often a period of time where a transient signal dominates before the system manages to get itself into a steady-state solution. If the dynamical system is nonlinear, the transient may be chaotic. This is different from a chaotic attractor (e.g. the Lorenz attractor), in that the steady state that the system is eventually drawn to isn't chaotic, just the transients. The time that the system takes for the transients to die down is called the escape rate. What the authors found is that if you turn the puzzle into a SAT problem, and then turn that into an equivalent dynamical system (where the attractor of the dynamical system is the solution to the SAT problem), the escape rate of the system correlates quite well with how Sudoku puzzles are rated for hardness. The interesting thing about this approach is that it's independent of Sudoku. Any puzzle which can be turned into a SAT problem fairly directly (the SAT problem and the puzzle can't diverge too far) can be analysed in the same way.

Comment: This is a very interesting approach, thanks for the reference. I don't understand why it should work so well, and it's also not clear to me how hard it is to calculate the escape rate (except maybe estimation by simulation), but I'll take a look at the reference and maybe it will come together.

Answer (Juho): If you want to measure the hardness of a puzzle "for a human", a typical approach is to look at the inference techniques needed to complete the puzzle. For instance, you can categorize inference strategies as basic, intermediate, and expert. These methods can all be implemented on a computer. If the computer can solve the puzzle with just basic methods, the puzzle is easy. If intermediate techniques are needed, the puzzle is medium. Otherwise, the computer needs expert strategies, and the puzzle is hard.
I have seen a program that classifies puzzles based on this approach, but can't seem to recall the exact reference. All the puzzles that can be solved with just inference (no guessing needed) will be easy for a computer. If you are interested in puzzles that are hard for a computer, one basic rule is that the solver needs to resort to guessing at some point. Generation of hard Sudoku puzzles (for a computer) has been investigated too, and you can easily find relevant papers. Juho $\begingroup$ This seems like a good approach if the 15 or so inference rules used by humans could be assigned a hardness score based on training data of humans solving puzzles, and then the puzzle difficulty could be either the sum or the maximum of the hardness for the sequence of rules required to solve the puzzle. The hard part to me seems to be how you infer the hardness score for each individual inference rule given solution times for entire puzzles when humans solve them. Also some rules should have variable hardness, like "alternating inference chain", where it gets harder the longer the chain is. $\endgroup$ $\begingroup$ "Sudoku as a Constraint Problem" by Helmut Simonis (4c.ucc.ie/~hsimonis/sudoku.pdf) is an article that grades Sudoku puzzles based on the inferences needed to solve the puzzles without search. $\endgroup$ – Zayenz
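To make the strategy-graded idea above concrete, here is a minimal sketch of how such a grader might be organized (an illustration only, not the program referenced in the answer): it repeatedly applies solving strategies in increasing order of difficulty and records the hardest one that was ever needed. The strategy functions (`naked_singles`, `hidden_singles`, `pointing_pairs`) are hypothetical placeholders; only the grading loop is spelled out.

```python
# Sketch of a Sudoku grader that rates difficulty by the hardest
# human-style inference strategy needed to finish the puzzle.
# Each strategy takes a 9x9 grid of candidate sets and returns True
# if it managed to place a digit or eliminate a candidate.

def naked_singles(cands):    # basic strategy -- assumed implementation
    raise NotImplementedError

def hidden_singles(cands):   # basic strategy -- assumed implementation
    raise NotImplementedError

def pointing_pairs(cands):   # intermediate strategy -- assumed implementation
    raise NotImplementedError

STRATEGIES = [               # ordered from easiest to hardest
    ("easy", naked_singles),
    ("easy", hidden_singles),
    ("medium", pointing_pairs),
    # x-wings, chains, etc. would be appended here with the label "hard"
]
RANK = {"easy": 0, "medium": 1, "hard": 2}

def solved(cands):
    return all(len(c) == 1 for row in cands for c in row)

def grade(cands):
    """Return the label of the hardest strategy needed, or "guessing"
    if the listed strategies alone cannot finish the puzzle."""
    hardest = "easy"
    while not solved(cands):
        for label, strategy in STRATEGIES:
            if strategy(cands):                  # made progress
                if RANK[label] > RANK[hardest]:
                    hardest = label
                break                            # restart from the easiest rule
        else:                                    # no strategy applied at all
            return "guessing"
    return hardest
```

A grader built this way matches the easy/medium/hard classification described in the answer; assigning numeric weights to each strategy, as suggested in the first comment, would only change how `hardest` is accumulated.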
Matching Items (10) Galam's Voting Systems and Public Debate Models Revisited Serge Galam's voting systems and public debate models are used to model voting behaviors of two competing opinions in democratic societies. Galam assumes that individuals in the population are independently in favor of one opinion with a fixed probability p, making the initial number of that type of opinion a binomial random variable. This analysis revisits Galam's models from the point of view of the hypergeometric random variable by assuming the initial number of individuals in favor of an opinion is a fixed deterministic number. This assumption is more realistic, especially when analyzing small populations. Evolution of the models is based on majority rules, with a bias introduced when there is a tie. For the hierarchical voting system model, in order to derive the probability that opinion +1 would win, the analysis was done by reversing time and assuming that an individual in favor of opinion +1 wins. Then, working backwards we counted the number of configurations at the next lowest level that could induce each possible configuration at the level above, and continued this process until reaching the bottom level, i.e., the initial population. Using this method, we were able to derive an explicit formula for the probability that an individual in favor of opinion +1 wins given any initial count of that opinion, for any group size greater than or equal to three. For the public debate model, we counted the total number of individuals in favor of opinion +1 at each time step and used this variable to define a random walk. Then, we used first-step analysis to derive an explicit formula for the probability that an individual in favor of opinion +1 wins given any initial count of that opinion for group sizes of three. The spatial public debate model evolves based on the proportional rule. For the spatial model, the most natural graphical representation to construct the process results in a model that is not mathematically tractable. Thus, we defined a different graphical representation that is mathematically equivalent to the first graphical representation, but in this model it is possible to define a dual process that is mathematically tractable. Using this graphical representation we prove clustering in 1D and 2D and coexistence in higher dimensions following the same approach as for the voter model interacting particle system. Taylor, Nicole Robyn (Co-author) Lanchier, Nicolas (Co-author, Thesis director) Smith, Hal (Committee member) Hurlbert, Glenn (Committee member) Design, analysis and resource allocations in networks in presence of region-based faults Communication networks, both wired and wireless, are expected to have a certain level of fault-tolerance capability. These networks are also expected to ensure a graceful degradation in performance when some of the network components fail.
Traditional studies on fault tolerance in communication networks, for the most part, make no assumptions regarding the location of node/link faults, i.e., the faulty nodes and links may be close to each other or far from each other. However, in many real life scenarios, there exists a strong spatial correlation among the faulty nodes and links. Such failures are often encountered in disaster situations, e.g., natural calamities or enemy attacks. In presence of such region-based faults, many traditional network analysis and fault-tolerance metrics that are valid under non-spatially correlated faults are no longer applicable. To this effect, the main thrust of this research is design and analysis of robust networks in presence of such region-based faults. One important finding of this research is that if some prior knowledge is available on the maximum size of the region that might be affected due to a region-based fault, this piece of knowledge can be effectively utilized for resource efficient design of networks. It has been shown in this dissertation that in some scenarios, effective utilization of this knowledge may result in substantial savings in transmission power in wireless networks. In this dissertation, the impact of region-based faults on the connectivity of wireless networks has been studied and a new metric, region-based connectivity, is proposed to measure the fault-tolerance capability of a network. In addition, novel metrics, such as the region-based component decomposition number (RBCDN) and region-based largest component size (RBLCS), have been proposed to capture the network state when a region-based fault disconnects the network. Finally, this dissertation presents efficient resource allocation techniques that ensure tolerance against region-based faults in distributed file storage networks and data center networks. Banerjee, Sujogya (Author) Sen, Arunabha (Thesis advisor) Xue, Guoliang (Committee member) Richa, Andrea (Committee member) On tiling directed graphs with cycles and tournaments A tiling is a collection of vertex disjoint subgraphs called tiles. If the tiles are all isomorphic to a graph $H$ then the tiling is an $H$-tiling. If a graph $G$ has an $H$-tiling which covers all of the vertices of $G$ then the $H$-tiling is a perfect $H$-tiling or an $H$-factor. A goal of this study is to extend theorems on sufficient minimum degree conditions for perfect tilings in graphs to directed graphs. Corrádi and Hajnal proved that every graph $G$ on $3k$ vertices with minimum degree $\delta(G)\ge 2k$ has a $K_3$-factor, where $K_s$ is the complete graph on $s$ vertices. The following theorem extends this result to directed graphs: If $D$ is a directed graph on $3k$ vertices with minimum total degree $\delta(D)\ge 4k-1$ then $D$ can be partitioned into $k$ parts each of size $3$ so that all of the parts contain a transitive triangle and $k-1$ of the parts also contain a cyclic triangle. The total degree of a vertex $v$ is the sum of $d^-(v)$ the in-degree and $d^+(v)$ the out-degree of $v$. Note that both orientations of $C_3$ are considered: the transitive triangle and the cyclic triangle. The theorem is best possible in that there are digraphs that meet the minimum degree requirement but have no cyclic triangle factor.
The possibility of adding a connectivity requirement to ensure a cyclic triangle factor is also explored. Hajnal and Szemerédi proved that if $G$ is a graph on $sk$ vertices and $\delta(G)\ge(s-1)k$ then $G$ contains a $K_s$-factor. As a possible extension of this celebrated theorem to directed graphs it is proved that if $D$ is a directed graph on $sk$ vertices with $\delta(D)\ge 2(s-1)k-1$ then $D$ contains $k$ disjoint transitive tournaments on $s$ vertices. We also discuss tiling directed graphs with other tournaments. This study also explores minimum total degree conditions for perfect directed cycle tilings and sufficient semi-degree conditions for a directed graph to contain an anti-directed Hamilton cycle. The semi-degree of a vertex $v$ is $\min\{d^+(v), d^-(v)\}$ and an anti-directed Hamilton cycle is a spanning cycle in which no pair of consecutive edges form a directed path. Molla, Theodore (Author) Kierstead, Henry A (Thesis advisor) Czygrinow, Andrzej (Committee member) Fishel, Susanna (Committee member) Spielberg, Jack (Committee member) Optimal degree conditions for spanning subgraphs In a large network (graph) it would be desirable to guarantee the existence of some local property based only on global knowledge of the network. Consider the following classical example: how many connections are necessary to guarantee that the network contains three nodes which are pairwise adjacent? It turns out that more than $n^2/4$ connections are needed, and no smaller number will suffice in general. Problems of this type fall into the category of "extremal graph theory." Generally speaking, extremal graph theory is the study of how global parameters of a graph are related to local properties. This dissertation deals with the relationship between minimum degree conditions of a host graph $G$ and the property that $G$ contains a specified spanning subgraph (or class of subgraphs). The goal is to find the optimal minimum degree which guarantees the existence of a desired spanning subgraph. This goal is achieved in four different settings, with the main tools being Szemerédi's Regularity Lemma; the Blow-up Lemma of Komlós, Sárközy, and Szemerédi; and some basic probabilistic techniques. DeBiasio, Louis (Author) Czygrinow, Andrzej (Thesis advisor) Kadell, Kevin (Committee member) Monotonicity and manipulability of ordinal and cardinal social choice functions Borda's social choice method and Condorcet's social choice method are shown to satisfy different monotonicities and it is shown that it is impossible for any social choice method to satisfy them both. Results of a Monte Carlo simulation are presented which estimate the probability of each of the following social choice methods being manipulable: plurality (first past the post), Borda count, instant runoff, Kemeny-Young, Schulze, and majority Borda. The Kemeny-Young and Schulze methods exhibit the strongest resistance to random manipulability. Two variations of the majority judgment method, with different tie-breaking rules, are compared for continuity.
A new variation is proposed which minimizes discontinuity. A framework for social choice methods based on grades is presented. It is based on the Balinski-Laraki framework, but doesn't require aggregation functions to be strictly monotone. By relaxing this restriction, strategy-proof aggregation functions can better handle a polarized electorate, can give a societal grade closer to the input grades, and can partially avoid certain voting paradoxes. A new cardinal voting method, called the linear median, is presented and is shown to have several very valuable properties. Range voting, the majority judgment, and the linear median are also simulated to compare their manipulability against that of the ordinal methods. Jennings, Andrew (Author) Hurlbert, Glenn (Thesis advisor) Barcelo, Helene (Thesis advisor) Balinski, Michel (Committee member) Laraki, Rida (Committee member) Jones, Don (Committee member) Erdős-Ko-Rado theorems: new generalizations, stability analysis and Chvátal's Conjecture The primary focus of this dissertation lies in extremal combinatorics, in particular intersection theorems in finite set theory. A seminal result in the area is the theorem of Erdős, Ko and Rado which finds the upper bound on the size of an intersecting family of subsets of an $n$-element set and characterizes the structure of families which attain this upper bound. A major portion of this dissertation focuses on a recent generalization of the Erdős–Ko–Rado theorem which considers intersecting families of independent sets in graphs. An intersection theorem is proved for a large class of graphs, namely chordal graphs which satisfy an additional condition, and similar problems are considered for trees, bipartite graphs and other special classes. A similar extension is also formulated for cross-intersecting families and results are proved for chordal graphs and cycles. A well-known generalization of the EKR theorem for $k$-wise intersecting families due to Frankl is also considered. A stability version of Frankl's theorem is proved, which provides additional structural information about $k$-wise intersecting families which have size close to the maximum upper bound. A graph-theoretic generalization of Frankl's theorem is also formulated and proved for perfect matching graphs. Finally, a long-standing conjecture of Chvátal regarding the structure of maximum intersecting families in hereditary systems is considered. An intersection theorem is proved for hereditary families which have rank 3, using a powerful tool of Erdős and Rado which is called the Sunflower Lemma. Kamat, Vikram M (Author) Colbourn, Charles (Committee member) Kierstead, Henry (Committee member) Pebbling in Split Graphs Graph pebbling is a network optimization model for transporting discrete resources that are consumed in transit: the movement of 2 pebbles across an edge consumes one of the pebbles.
The pebbling number of a graph is the fewest number of pebbles $t$ so that, from any initial configuration of $t$ pebbles on its vertices, one can place a pebble on any given target vertex via such pebbling steps. It is known that deciding whether a given configuration on a particular graph can reach a specified target is NP-complete, even for diameter 2 graphs, and that deciding whether the pebbling number has a prescribed upper bound is $\Pi_2^P$-complete. On the other hand, for many families of graphs there are formulas or polynomial algorithms for computing pebbling numbers; for example, complete graphs, products of paths (including cubes), trees, cycles, diameter 2 graphs, and more. Moreover, graphs having minimum pebbling number are called Class 0, and many authors have studied which graphs are Class 0 and what graph properties guarantee it, with no characterization in sight. In this paper we investigate an important family of diameter 3 chordal graphs called split graphs; graphs whose vertex set can be partitioned into a clique and an independent set. We provide a formula for the pebbling number of a split graph, along with an algorithm for calculating it that runs in $O(n^{\beta})$ time, where $\beta = 2\omega/(\omega + 1) \approx 1.41$ and $\omega \approx 2.376$ is the exponent of matrix multiplication. Furthermore we determine that all split graphs with minimum degree at least 3 are Class 0. Alcon, Liliana (Author) Gutierrez, Marisa (Author) Hurlbert, Glenn (Author) College of Liberal Arts and Sciences (Contributor) A fun way to help students discover discrete mathematics This thesis focuses on sequencing questions in a way that provides students with manageable steps to understand some of the fundamental concepts in discrete mathematics. The questions are aimed at younger students (middle and high school aged) with the goal of helping young students, who have likely never seen discrete mathematics, to learn through guided discovery. Chapter 2 is the bulk of this thesis as it provides questions, hints, solutions, as well as a brief discussion of each question. In the discussions following the questions, I have attempted to illustrate some relationships between the current question and previous questions, explain the learning goals of that question, as well as point out possible flaws in students' thinking or point out ways to explore this topic further. Chapter 3 provides additional questions with hints and solutions, but no discussion. Many of the questions in Chapter 3 contain ideas similar to questions in Chapter 2, but also illustrate how versatile discrete mathematics topics are. Chapter 4 focuses on possible future directions. The overall framework for the questions is that a student is hosting a birthday party, and all of the questions are ones that might actually come up in party planning. The purpose of putting it in this setting is to make the questions seem more coherent and less arbitrary or forced. Bell, Stephanie (Author) Fishel, Susana (Thesis advisor) Quigg, John (Committee member) Coloring graphs from almost maximum degree sized palettes
Every graph can be colored with one more color than its maximum degree. A well-known theorem of Brooks gives the precise conditions under which a graph can be colored with maximum degree colors. It is natural to ask for the required conditions on a graph to color with one less color than the maximum degree; in 1977 Borodin and Kostochka conjectured a solution for graphs with maximum degree at least 9: as long as the graph doesn't contain a maximum-degree-sized clique, it can be colored with one fewer than the maximum degree colors. This study attacks the conjecture on multiple fronts. The first technique is an extension of a vertex shuffling procedure of Catlin and is used to prove the conjecture for graphs with edgeless high vertex subgraphs. This general approach also bears more theoretical fruit. The second technique is an extension of a method Kostochka used to reduce the Borodin-Kostochka conjecture to the maximum degree 9 case. Results on the existence of independent transversals are used to find an independent set intersecting every maximum clique in a graph. The third technique uses list coloring results to exclude induced subgraphs in a counterexample to the conjecture. The classification of such excludable graphs that decompose as the join of two graphs is the backbone of many of the results presented here. The fourth technique uses the structure theorem for quasi-line graphs of Chudnovsky and Seymour in concert with the third technique to prove the Borodin-Kostochka conjecture for claw-free graphs. The fifth technique adds edges to proper induced subgraphs of a minimum counterexample to gain control over the colorings produced by minimality. The sixth technique adapts a recoloring technique originally developed for strong coloring by Haxell and by Aharoni, Berger and Ziv to general coloring. Using this recoloring technique, the Borodin-Kostochka conjecture is proved for graphs where every vertex is in a large clique. The final technique is naive probabilistic coloring as employed by Reed in the proof of the Borodin-Kostochka conjecture for large maximum degree. The technique is adapted to prove the Borodin-Kostochka conjecture for list coloring for large maximum degree. Rabern, Landon (Author) Kierstead, Henry (Thesis advisor) On-line coloring of partial orders, circular arc graphs, and trees A central concept of combinatorics is partitioning structures with given constraints. Partitions of on-line posets and on-line graphs, which are dynamic versions of the more familiar static structures posets and graphs, are examined. In the on-line setting, vertices are continually added to a poset or graph while a chain partition or coloring (respectively) is maintained. The optima of the static cases cannot be achieved in the on-line setting. Both upper and lower bounds for the optimum of the number of chains needed to partition a width $w$ on-line poset exist. Kierstead's upper bound of $\frac{5^w-1}{4}$ was improved to $w^{14 \lg w}$ by Bosek and Krawczyk.
This is improved to $w^{3+6.5 \lg w}$ by employing the First-Fit algorithm on a family of restricted posets (expanding on the work of Bosek and Krawczyk). Namely, the family of ladder-free posets where the $m$-ladder is the transitive closure of the union of two incomparable chains $x_1\le\dots\le x_m$, $y_1\le\dots\le y_m$ and the set of comparabilities $\{x_1\le y_1,\dots, x_m\le y_m\}$. No upper bound on the number of colors needed to color a general on-line graph exists. To lay this fact plain, the performance of on-line coloring of trees is shown to be particularly problematic. There are trees that require $n$ colors to color on-line for any positive integer $n$. Furthermore, there are trees that usually require many colors to color on-line even if they are presented without any particular strategy. For restricted families of graphs, upper and lower bounds for the optimum number of colors needed to maintain an on-line coloring exist. In particular, circular arc graphs can be colored on-line using less than 8 times the optimum number from the static case. This follows from the work of Pemmaraju, Raman, and Varadarajan in on-line coloring of interval graphs. Smith, Matthew Earl (Author)
Journal Articles, Cell Death and Disease, 2021. Intranasal administration of $\alpha$-synuclein preformed fibrils triggers microglial iron deposition in the substantia nigra of Macaca fascicularis. Jian-Jun Guo, Feng Yue, Dong-Yan Song, Luc Bousset, Xin Liang, Jing Tang, Lin Yuan, Wen Li, Ronald Melki, Yong Tang, Piu Chan, Chuang Guo, Jia-Yi Li. Affiliations: Northeastern University [Shenyang], Peking University [Beijing], CNRS - Centre National de la Recherche Scientifique, Shenzhen University [Shenzhen], MIRCen - Molecular Imaging Research Center [Fontenay-aux-Roses]. Iron deposition is present in main lesion areas in the brains of patients with Parkinson's disease (PD) and an abnormal iron content may be associated with dopaminergic neuronal cytotoxicity and degeneration in the substantia nigra of the midbrain. However, the cause of iron deposition and its role in the pathological process of PD are unclear. In the present study, we investigated the effects of the nasal mucosal delivery of synthetic human α-synuclein (α-syn) preformed fibrils (PFFs) on the pathogenesis of PD in Macaca fascicularis. We detected that iron deposition was clearly increased in a time-dependent manner from 1 to 17 months in the substantia nigra and globus pallidus, highly contrasting to other brain regions after treatments with α-syn PFFs. At the cellular level, the iron deposits were specifically localized in microglia but not in dopaminergic neurons, nor in other types of glial cells in the substantia nigra, whereas the expression of transferrin (TF), TF receptor 1 (TFR1), TF receptor 2 (TFR2), and ferroportin (FPn) was increased in dopaminergic neurons. Furthermore, no clear dopaminergic neuron loss was observed in the substantia nigra, but with decreased immunoreactivity of tyrosine hydroxylase (TH) and appearance of axonal swelling in the putamen. The brain region-enriched and cell-type-dependent iron localizations indicate that the intranasal α-syn PFFs treatment-induced iron depositions in microglia in the substantia nigra may appear as an early cellular response that may initiate neuroinflammation in the dopaminergic system before cell death occurs. Our data suggest that the inhibition of iron deposition may be a potential approach for the early prevention and treatment of PD. DOI: 10.1038/s41419-020-03369-x. Jian-Jun Guo, Feng Yue, Dong-Yan Song, Luc Bousset, Xin Liang, et al. Intranasal administration of $\alpha$-synuclein preformed fibrils triggers microglial iron deposition in the substantia nigra of Macaca fascicularis. Cell Death and Disease, 2021, 12, pp.1-14. ⟨10.1038/s41419-020-03369-x⟩. ⟨cea-03116047⟩
What do "function of" and "differentiate with respect to" mean? Asked 14 days ago In maths and sciences, I see the phrases "function of" and "with respect to" used quite a lot. For example, one might say that $f$ is a function of $x$, and then differentiate $f$ "with respect to $x$". I am familiar with the definition of a function and of the derivative, but it's really not clear to me what a function of something is, or why we need to say "with respect to". I find all this a bit confusing, and it makes it hard for me to follow arguments sometimes. In my research, I've found this, but the answers here aren't quite what I'm looking for. The answers there seemed to discuss what a function is, but I know what a function is. I am also unsatisfied with the suggestion that $f$ is a function of $x$ if we just label its argument as $x$, since labels are arbitrary. I could write $f(x)$ for some value in the domain of $f$, but couldn't I equally well write $f(t)$ or $f(w)$ instead? To illustrate my confusion with a concrete example: consider the cumulative amount of wax burnt, $w$, as a candle burns. In a simple picture, we could say that $w$ depends on the amount of time for which the candle has been burning, and so we might say something like "$w$ is a function of time". In this simple picture, $w$ is a function of a single real variable. My confusion is, why do we actually say that $w$ is a function of time? Surely $w$ is just a function on some subset of the real numbers (depending specifically on how we chose to define $w$), rather than a function of time? Sure, $w$ only has the interpretation we think it does (cumulative amount of wax burnt) when we provide a time as its argument, but why does that mean it is a function of time? There's nothing stopping me from putting any old argument (provided $w$ is defined at that point) into $w$, like the distance I have walked since the candle was lit. Sure, we can't really interpret $w$ in the same way if I did this, but there is nothing in the definition of $w$ which stops me from doing this. Also, what happens when I do some differentiation on $w$? If I differentiate $w$ "with respect to time", then I'd get the time rate at which the candle is burning. If I differentiate $w$ "with respect to" the distance I have walked since the candle was lit, I'd expect to get either zero (since $w$ is not a function of this), or something more complicated (since the distance I have walked is related to time). I just can't see mathematically what is happening here: ultimately, no matter what we're calling our variables, $w$ is a function of a single variable, not of multiple, and so shouldn't there be absolutely no ambiguity in how to differentiate $w$? Shouldn't there just be "the derivative of $w$", found by differentiating $w$ with respect to its argument (writing "with respect to its argument" is redundant!)? Can anyone help clarify what we mean by "function of" as opposed to function, and how this is important when we differentiate functions "with respect to" something? Thanks! calculus derivatives terminology Martin Sleziak Deeside
Beyond that...to say that one variable $y$ is "a function of another, $x$" just means that changes in $x$ are expected to produce changes in $y$. Does that help? $\endgroup$ – lulu Jan 5 at 23:58 $\begingroup$ I'm not sure whether you are asking about functions in the abstract or functions in a scientific sense. If you mean the latter, then of course some observed variable might be expected to depend on other variables. $\endgroup$ – lulu Jan 5 at 23:59 $\begingroup$ @lulu I would still be a bit confused by that. Suppose now that $w$ in the question depends on time, and altitude, so $w$ is a function of two real variables. Let the first argument of $w$ be time, and the second argument altitude. All my confusion about $w$ is still there, though. If $a$ is a time and $b$ is an altitude, then $w(a, b)$ has the meaning we want it to. However, we can still consider $w(c, d)$, where $c$ is a value of distance I've walked, and $d$ is the pressure in the room of the candle. My confusion about single variables still is there for multiple. $\endgroup$ – Deeside Jan 6 at 0:04 $\begingroup$ So what's confusing me is why we care about what the labels are. I understood that when we write $f(x) = x^2$, we're saying something along the lines of "$f$ a function which squares its argument", and that $x$ doesn't really 'exist', so to speak, outside of the definition of $f$. Since I thought we think of functions as independent objects of what we've called their variables, why don't we have $f(t) = t^2$? And why does it matter what we call some $x$ outside of the definition of $f$? $\endgroup$ – Deeside Jan 6 at 0:11 $\begingroup$ But since the variable names are arbitrary, why should we say $G$ is a function of $x$ and $y$, or $a$ and $b$, etc. Why is that useful? In your population example, I get how the population can change with all the actual variables, but I'm really struggling to see how we actually formalise this in maths as an actual mathematical function, since the variable names are arbitrary. $\endgroup$ – Deeside Jan 6 at 0:35 As a student of math and physics, this has been one of the biggest annoyances for me; I'll give my two cents on the matter. Throughout my entire answer, whenever I use the term "function", it will always mean in the usual math sense (a rule with a certain domain and codomain blablabla). I generally find two ways in which people use the phrase "... is a function of ..." The first is as you say: "$f$ is a function of $x$" simply means that for the remainder of the discussion, we shall agree to denote the input of the function $f$ by the letter $x$. This is just a notational choice as you say, so there's no real math going on. We just make this choice of notation to in a sense "standardize everything". Of course, we usually allow for variants on the letter $x$. So, we may write things like $f(x), f(x_0), f(x_1), f(x'), f(\tilde{x}), f(\bar{x})$ etc. The way to interpret this is as usual: this is just the result obtained by evaluating the function $f$ on a specific element of its domain. Also, you're right that the input label is completely arbitrary, so we can say $f(t), f(y), f(\ddot{\smile})$ whatever else we like. 
But again, often times it might just be convenient to use certain letters for certain purposes (this can allow for easier reading, and also reduce notational conflicts); and as much as possible it is a good idea to conform to the widely used notation, because at the end of the day, math is about communicating ideas, and one must find a balance between absolute precision and rigour and clarity/flow of thought. btw as a side remark, I think I am a very very very nitpicky individual regarding issues like: $f$ vs $f(x)$ for a function, I'm also always careful to use my quantifiers properly etc. However, there have been a few textbooks I glossed over, which are also extremely picky and explicit and precise about everything; but while what they wrote was $100 \%$ correct, it was difficult to read (I had to pause often etc). This is as opposed to some other books/papers which leave certain issues implicit, but convey ideas more clearly. This is what I meant above regarding balance between precision and flow of thought. Now, back to the issue at hand. In your third and fourth paragraphs, I think you have made a couple of true statements, but you're missing the point. (one of) the job(s) of any scientist is to quantitatively describe and explain observations made in real life. For example, you introduced the example of the amount of wax burnt, $w$. If all you wish to do is study properties of functions which map $\Bbb{R} \to \Bbb{R}$ (or subsets thereof), then there is clearly no point in calling $w$ the wax burnt or whatever. But given that you have $w$ as the amount of wax burnt, the most naive model for describing how this changes is to assume that the flame which is burning the wax is kept constant and all other variables are kept constant etc. Then, clearly the amount of wax burnt will only depend on the time elapsed. From the moment you start your measurement/experiment process, at each time $t$, there will be a certain amount of wax burnt off, $w(t)$. In other words, we have a function $w: [0, \tau] \to \Bbb{R}$, where the physical interpretation is that for each $t \in [0, \tau]$, $w(t)$ is the amount of wax burnt off $t$ units of time after starting the process. Let's for the sake of definiteness say that $w(t) = t^3$ (with the above domain and codomain). "Sure, $w$ only has the interpretation we think it does (cumulative amount of wax burnt) when we provide a (real number in the domain of definition, which we interpret as) time as its argument" "...Sure, we can't really interpret $w$ in the same way if I did this, but there is nothing in the definition of w which stops me from doing this." Also true. But here's where you're missing the point. If you didn't want to give a physical interpretation of what elements in the domain and target space of $w$ mean, why would you even talk about the example of burning wax? Why not just tell me the following: Fix a number $\tau > 0$, and define $w: [0, \tau] \to \Bbb{R}$ by $w(t) = t^3$. This is a perfectly self-contained mathematical statement. And now, I can tell you a bunch of properties of $w$. Such as: $w$ is an increasing function For all $t \in [0, \tau]$, $w'(t) = 3t^2$ (derivatives at end points of course are interpreted as one-sided limits) $w$ has exactly one root (of multiplicity $3$) on this interval of definition. (and many more other properties). So, if you want to completely forget about the physical context, and just focus on the function and its properties, then of course you can do so. 
Sometimes, such an abstraction is very useful as it removes any "clutter". However, I really don't think it is (always) a good idea to completely disconnect mathematical ideas from their physical origins/interpretations. And the reason that in the sciences people often assign such interpretations is because their purpose is to use the powerful tool of mathematics to quantitatively model an actual physical observation. So, while you have made a few technically true statements in your third and fourth paragraphs, I believe you've missed the point of why people assign physical meaning to certain quantities. For your fifth paragraph however, I agree with the sentiment you're describing, and questions like this have tortured me. You're right that $w$ is a function of a single variable (where in this physical context, we interpret the arguments as time). If you now ask me how $w$ changes in relation to the distance I have started to walk, then I completely agree that there is no relation whatsoever. But what is really going on is a terrible, annoying, confusing abuse of notation, where we use the same letter $w$ to have two different meanings. Physicists love such abuse of notation, and this has confused me for so long (and it still does from time to time). Of course, the intuitive idea of why the amount of wax burnt should depend on distance is clear: the further I walk, the more time has passed, and hence the more wax has burnt. So, this is really a two-step process. To formalize this, we need to introduce a second function $\gamma$ (between certain subsets of $\Bbb{R}$), where the interpretation is that $\gamma(x)$ is the time taken to walk a distance $x$. Then when we (by abuse of language) say $w$ is a function of distance, what we really mean is that the composite function $w \circ \gamma$ has the physical interpretation that for each $x \in \text{domain}(\gamma)$, $(w \circ \gamma)(x)$ is the amount of wax burnt when I walk a distance $x$. Very often, this composition is not made explicit. In the Leibniz chain rule notation \begin{align} \dfrac{dw}{dx} &= \dfrac{dw}{dt} \dfrac{dt}{dx} \end{align} where on the LHS $w$ is miraculously a function of distance, even though on the RHS (and initially) $w$ was a function of time, what is really going on is that the $w$ on the LHS is a complete abuse of notation. And of course, the precise way of writing it is $(w \circ \gamma)'(x) = w'(\gamma(x)) \cdot \gamma'(x)$. In general, whenever you initially have a function $f$ "as a function of $x$" and then suddenly it becomes a "function of $t$", what is really meant is that we are given two functions $f$ and $\gamma$; and when we say "consider $f$ as a function of $x$", we really mean to just consider the function $f$, but when we say "consider $f$ as a function of time", we really mean to consider the (completely different) function $f \circ \gamma$. Summary: if the arguments of a function suddenly change interpretations (e.g. from time to distance or really anything else) then you immediately know that the author is being sloppy/lazy and not explicitly mentioning that there is a hidden composition. answered Jan 6 at 2:15 peek-a-boo Excellent question. There are already good answers, I'll try to make a few, concise points. Be nice to your readers You should try to be nice to people reading and using your definitions, including your future self. It means that you should stick to conventions when possible.
Variable names imply domain and codomain If you write that "$f$ is a function of $x$", readers will assume that it means that $f:\mathbb{R}\rightarrow\mathbb{R}$. Similarly, if you write $f(z)$ it will imply that $f:\mathbb{C}\rightarrow\mathbb{C}$, and $f(n)$ might be for $f:\mathbb{N}\rightarrow\mathbb{Z}$. It wouldn't be wrong to define $f:\mathbb{C}\rightarrow\mathbb{C}$ as $f(n)= \frac{in+1}{\overline{n}-i}$ but it would be surprising and might lead to incorrect assumptions (e.g. $\overline{n} = n$). Free and bound variables You might be interested in knowing the distinction between free and bound variables. $$\sum_{k=1}^{10} f(k, n)$$ $n$ is a free variable and $k$ is a bound variable; consequently the value of this expression depends on the value of n, but there is nothing called $k$ on which it could depend. Here's a related answer on StackOverflow. "All models are wrong, some are useful", George Box Your simplified amount of wax burnt as a function of time is probably wrong (it cannot perfectly know or describe the status of every atom) but it might at least be useful. The amount of wax burnt as a function of "the distance you have walked since the candle was lit" will be even less correct and much less useful. Physical variable names have meaning Physical variable names are not just placeholders. They are linked to physical quantities and units. Replacing $l$ by $t$ as a variable name for a function will not just be surprising to readers, it will break dimensional homogeneity. answered Jan 6 at 10:37 Eric DuminilEric Duminil Sometimes, especially in physical contexts, the view is not of functions acting on arguments but rather of constraints acting on variables. The simplest example is that maybe we have variables $w$ and $t$ representing the length of wax burned and the duration since the candle was lit respectively, and we observe the following relation: $$w=\left(1\,\frac{\text{meter}}{\text{second}}\right)\cdot t$$ You can imagine this as the implicit definition of a curve in a $w$-$t$ plane. It's legal to take "the derivative" of both sides to get: $$dw=\left(1\,\frac{\text{meter}}{\text{second}}\right) \cdot dt$$ where the items on either side are formally known as differential forms. Here, you can't just swap out variables because $w$ was not defined as a function - it is related to some other quantity in a fixed way! One can read this equation as saying that, no matter how we change the state, over a small enough amount of change, the amount of candle burned is proportional to the duration passed as long as this equation holds. A somewhat more practical idea of this is to consider what would happen if we wanted to represent a point on the circle. We know that a point $(x,y)$ is only a valid state if $$x^2+y^2=1$$ and we can take the derivative of both sides to get $$2x\,dx+2y\,dy=0$$ or, simplifying $$x\,dx + y\,dy = 0$$ which essentially reads that, no matter how this system moves or what laws might dictate how $x$ and $y$ vary through time or any other parameter, for small changes, the sum of each coordinate times its instantaneous rate of change must be zero. We could also rearrange to $dx=\frac{-y}x\,dy$ which clarifies that the derivative of $x$ with respect to $y$ is $\frac{-y}x$, meaning that the changes $dx$ and $dy$ in these variables are proportional by this constant. Note that we can also add more information freely; suppose that $x$ is actually varying in time and is given as $x=t^2$. Then $dx=2t\,dt$. 
We could substitute this into the prior formula to find out that $$x\cdot(2t\,dt) + y\,dy = 2t^3\,dt+y\,dy = 0$$ in a perfectly rigorous fashion. Then, we can see that the derivative of $y$ with respect to $t$ is $\frac{-2t^3}y$ by rearranging to get $dy$ as the product of $dt$ by that expression. Notice how the variables are integral to this point of view: "the derivative of $x$" is perhaps an acceptable way to refer to $dx$, but that symbol tells you nothing; the idea of "derivative of $x$ with respect to $y$" tells you a meaningful relationship between $dx$ and $dy$ - which are objects in their own right (differential forms), rather than evaluations of $f'$ for some function $f$. This is actually a rather convenient way to do calculus - for instance, the fact that you can substitute in for anything (including $dx$) replaces both the chain rule and the formulas for integration by substitution, which makes calculus feel more like algebra. Okay, but how does this relate to the idea of "function of" and "differentiate with respect to"? Well, whenever we have some expression of the form $$da=k\cdot db$$ where $a$ and $b$ and $k$ are variables, we might write that $k=\frac{da}{db}$ (which is an abuse of notation, not literal division - you cannot divide differential forms!) is the derivative of $a$ with respect to $b$ since it's the constant of proportionality relating the change of those variables. Similarly, expressions of the form $$a=f(b)$$ can often be read as saying that $a$ is a function of $b$ - in the very literal sense where "is" means "equals" and "a function" refers to $f$ and "of" refers to function application. These are still variables, but there's a function involved now, and we do indeed have $$da= f'(b)\,db$$ where $f'$ is the derivative of the (abstract) function $f$. Of course, if you consider $f$ as a function whose domain is the set of durations and whose codomain is the set of lengths, you will find that $f'$ carries units of speed by definition of the derivative - so there is still some concrete information in $f$, even if we could take some other duration $c$ and write $f(c)$ (though we wouldn't know that this was equal to anything of interest). Sometimes we even say $a$ is a function of $b$ if a relation like $a=f(b)$ just holds on some section of the space of states (e.g. if the coordinates are just restricted to be on some circle, where no relation like this holds globally). Unless you are working in a single dimensional space of states (as is the case for a circle or a line in the earlier examples), the derivative of one variable with respect to another needn't exist - which also indicates another meaning of "differentiate with respect to". For instance, suppose we wanted to consider a sphere: $$x^2+y^2+z^2=1$$ We can differentiate and rearrange to get that if $x\neq 0$ then $$dx = \frac{-y}{x}\,dy + \frac{-z}x\,dz$$ If we agree that $y$ and $z$ are the canonical coordinates, then the coefficients $\frac{-y}x$ and $\frac{-z}x$ are the derivatives of $x$ with respect to $y$ and $z$ respectively. This can also be thought of as a two-step process where we look at the sets of states where the $z$ coordinate is fixed (which is then one dimensional) and find a coefficient of proportionality between $dx$ and $dy$ - noting that this meaning of the word does depend on the definition of $z$, so you have to actually choose a whole coordinate system to get any well-defined notion of "differentiate with respect to" out of multiple dimensions.
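(Another aside, not from the answer itself: the circle computation above can be reproduced with sympy by treating $x$ as depending on $y$ along the constraint and solving for the proportionality factor; the extra step $x = t^2$ is handled the same way. The library calls are standard sympy; everything else is just a transcription of the example.)

```python
# Implicit differentiation along the constraint x**2 + y**2 = 1:
# recover dx/dy = -y/x, and with the extra information x = t**2, dy/dt = -2*t**3/y.
import sympy as sp

y = sp.symbols("y")
x = sp.Function("x")(y)                      # x varies with y along the circle

relation = x**2 + y**2 - 1                   # the constraint, written as (...) = 0
differentiated = sp.diff(relation, y)        # 2*x(y)*x'(y) + 2*y
dx_dy = sp.solve(differentiated, sp.Derivative(x, y))[0]
print(dx_dy)                                 # -y/x(y)

t = sp.symbols("t")
y_of_t = sp.Function("y")(t)                 # now let everything vary with t and set x = t**2
circle_in_t = (t**2)**2 + y_of_t**2 - 1
dy_dt = sp.solve(sp.diff(circle_in_t, t), sp.Derivative(y_of_t, t))[0]
print(sp.simplify(dy_dt))                    # -2*t**3/y(t), matching the text
```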
In summary, a lot of this terminology arises because there are multiple formal viewpoints on calculus; you are largely writing about the view that calculus studies functions $\mathbb R\rightarrow\mathbb R$, but it is also valid to view calculus as studying variables defined on a space. This latter view better explains terms like "function of" and "derivative with respect to" which refer literally to variables that are not treated as functions. Formal disclaimer: Largely, this view is associated to differential geometry where we have some differentiable manifold $M$ (i.e. a set with enough structure that we can do differential calculus on - like a curve or a surface) which represents the set of all possible states of a system (e.g. all the points on a circle or all the states that a burning candle passes through) and then each "variable" is a function $M\rightarrow\mathbb R$ that reads off some quality of that state (e.g. the $x$ coordinate or the amount of wax burned). Note that this is somewhat backwards from the functional view, since there is no separation between inputs and outputs and no parameterization of the manifold $M$ implied - and since one can work purely off of the relationships between these variables. However, note that this largely avoids the "function of what?" problem because our variables, though they are functions, are functions on a very meaningful domain: the set of legal states of a system - and while you might be able to parameterize these states by real numbers, these states needn't be thought of as real numbers. Even better is that we don't have to think of the codomain of variables as being $\mathbb R$ - for instance $w$ could be a map from $M$ to the space of lengths and $t$ could be a map to the space of durations, which can both be parameterized by real numbers, but inherently have units and are therefore not naturally equal to the real numbers. So, as is surprisingly common in mathematics, we have really just taken a function and said "we're going to call it a variable and use the notation we'd use for a real number", but everything works out like you'd expect, so it's okay. The point of view basically boils down to "we need to define $M$ in order to make this rigorous, but we will never mention it if we don't have to." Formal disclaimer 2: Sometimes this notion is also used in connection with the study of differential algebras, which is fairly different from what is presented here, but it's unlikely that you'd encounter these things unless you were really looking for them, so don't worry about it. Milo BrandtMilo Brandt $\begingroup$ "but everything works out like you'd expect" – except when it doesn't, which is kind of often the case. Planck spectrum maximum is a classical example where this confuses the hell out of everybody first learning it. $\endgroup$ – leftaroundabout Jan 6 at 14:38 $\begingroup$ when you said "a set without enough structure" did you mean "a set with enough structure"? $\endgroup$ – J. W. Tanner Jan 7 at 20:04 $\begingroup$ @J.W.Tanner Yes. $\endgroup$ – Milo Brandt Jan 7 at 20:05 Technically, you cannot consistently say that $f$ is a function (in the modern sense) and yet say that $f$ is a function of $x$. This kind of inconsistency seems to have arisen when some people got sloppy and conflated the older sense of "function" with the modern sense. In the older sense, we say "$y$ is a function of $x$" to mean that "in all situations where $x,y$ are defined, for each possible value of $x$ there is a specific value of $y$". 
In modern terms, this means "there exists a function $f$ such that $y = f(x)$ for all $x \in D$ where $D$ is the domain of possible values of $x$ under consideration". In the older usage of "function of", a mapping was conceived only to exist between variables; it did not exist by itself. In other words, "function of" was a relation between variables and expressions involving variables. Note that this usage of "variable" is the older sense, not the newer one from modern logic. Also be careful not to confuse variables in this sense with just plain numbers. If $x,y$ are plain real numbers, then we cannot say anything like "$y$ is a function of $x$". The concept of "function of" only makes sense in relation to variables (literally varying quantities). If $x$ is a real and $f$ is a function on the reals, then $f(x)$ is just another real, not a function, nor a function of anything. But if $x$ is a variable, then $f(x)$ is also a variable and is literally a function of $x$. In the newer sense, we do not use the phrase "function of" because we have come up with the abstract concept of "function" as objects in their own right. In other words, "function" is a type of object. If we have a function $f : S \to T$, then $f$ is a mapping from $S$ to $T$, and not the result of applying that mapping to some object in $S$. Note that the two senses are not incompatible; you just have to use them precisely. To take your example, consider the burning of a candle. Let $h$ be the height of the candle, and $w$ be the amount of wax remaining on the candle. Then $h,w$ are variables and they vary over time. It is thus natural to let $t$ be the variable denoting time. We can validly say that $w$ is a function of $h$, meaning that there is some function $f$ such that $w = f(h)$ for every $h \in [0,H]$, where $H$ is the initial height of the candle. We can also ask for the derivative of $w$ with respect to $h$, denoted by $\frac{dw}{dh}$. In modern terms, you can ask for the derivative of $f$, denoted by $f'$. But here we are asking for the derivative of the expression $w$, and so it is in fact necessary to specify with respect to what variable. Note that the same variable $w$ can also be a (different) function of time $t$. There are many advantages of using a formalization of differentiation that includes Leibniz notation, namely the notation "$\frac{dy}{dx}$" (not a fraction) for the derivative of $y$ with respect to $x$. One is that facts like the chain rule can be proven in a natural way without sacrificing rigour. And as an example application to the burning candle above, if $\frac{dw}{dh}$ and $\frac{dh}{dt}$ are defined, then by the chain rule we have $\frac{dw}{dh} \cdot \frac{dh}{dt} = \frac{dw}{dt}$. Another is that we can reason about the gradient of parametric curves even at points where the curve is not locally bijective (see the second example here). A third advantage is that in the physical sciences it is typical to have implicit relations, where we are interested in certain variables and how they vary with respect to one another, even though in an actual experiment those variables vary with time. For example in a titration we may be interested in the point where the pH changes most slowly with respect to titrant amount (see this post for details), even though during the actual titration both pH and titrant amount are varying with time. Conceptually, it is more elegant to treat these as variables rather than one as being the output of a function on the other.
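(As a computational footnote to this answer, and with an invented linear model purely for illustration — the constants $c$, $k$, $H$ below are not part of the answer — the Leibniz identity $\frac{dw}{dh} \cdot \frac{dh}{dt} = \frac{dw}{dt}$ can be checked symbolically.)

```python
# Illustrate the chain rule dw/dh * dh/dt = dw/dt for the candle,
# using an invented linear model: h = H - k*t (height) and w = c*h (wax left).
import sympy as sp

t, h, H, k, c = sp.symbols("t h H k c", positive=True)

h_of_t = H - k*t        # the variable h expressed in terms of the variable t
w_of_h = c*h            # the variable w expressed in terms of the variable h

dw_dh = sp.diff(w_of_h, h)                      # c
dh_dt = sp.diff(h_of_t, t)                      # -k
dw_dt = sp.diff(w_of_h.subs(h, h_of_t), t)      # -c*k, differentiating w.r.t. t directly

# The Leibniz identity, with dw/dh evaluated "at h = h(t)":
assert sp.simplify(dw_dh.subs(h, h_of_t) * dh_dt - dw_dt) == 0
print(dw_dh, dh_dt, dw_dt)                      # c, -k, -c*k
```

The point of the sketch is only that the derivative of the expression `w_of_h` has to name the variable it is taken with respect to; the same expression differentiated with respect to `t` after the substitution gives a different (but chain-rule-consistent) answer.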
This is a partial answer reflecting on a comment of yours under your original post: So what's confusing me is why we care about what the labels are. I understood that when we write $f(x)=x^2$, we're saying something along the lines of "$f$ is a function which squares its argument", and that $x$ doesn't really 'exist', so to speak, outside of the definition of $f$. Since I thought we think of functions as independent objects of what we've called their variables, why don't we have $f(t)=t^2$? And why does it matter what we call some $x$ outside of the definition of $f$? Source: comment of Deeside I totally get your viewpoint. You view functions as objects with two traits: they have a type, e.g. $f\colon \mathbb{R} \to \mathbb{R}$ they allow function application, e.g. $f x$ if $x \in \mathbb{R}$ Hence, as there is absolutely no notion of argument names involved, you cannot just say $\frac{\mathrm{d}f}{\mathrm{d}x}$. Instead, one should say $\frac{\mathrm{d}f}{\mathrm{d}1}$, i.e. that we differentiate wrt. the first argument. Indeed, I've seen some people do this with the notation $\partial_1 f$ or $f_1$. If the function only has one argument, then we can also introduce the notation $f'$ to stand for differentiation wrt. the obvious and only argument. However, I am not sure if that simplistic viewpoint of "positional differentiation"1 is helpful, say helpful for formalization of math in computer systems. Mathematicians do use "named differentiation"1 as well, so our formalization tools and their underlying logic should support this. I am not sure how current libraries of Coq, Isabelle and others handle named differentiation — if at all. Perhaps someone else can comment on this. Until then, I'd like to present how I currently think of named differentiation in my head: function objects can, in addition to the traits above, have a bijective map $\text{positions} \leftrightarrow \text{argument names}$. E.g. $f$ would have the map $\{1 \leftrightarrow \text{"}x\text{"}\}$. You could see this as an optional part of function types. Then, the expression $\frac{\mathrm{d}f}{\mathrm{d}x}$ is well-typed iff the type of $f$ has such a map and that map contains an entry for $\text{"}x\text{"}$. I also find the other approaches in the other answers I skimmed over interesting. The everything-is-a-variable approach reminds me of probability theory and random variables. There, random variables are also just defined on-the-fly like $X := Y + Z$ and then we just write $\mathrm{Pr}[X]$, where the probability is implicitly taken over all "argument dependencies" of $X$. 1 I just made up these terms. ComFreek Let $w$ represent the amount of wax burnt. We could say that $w$ is a function of time. The quantity of wax burnt is strictly increasing and continuous. Suppose you were walking home when your wife lit the candle. We could express your distance from home also as a function of time $x(t)$. This function is strictly decreasing and continuous. We could also express $w$ as a function of your distance from home! Then we could discuss the change in the quantity of wax burnt either with respect to a change in $t,$ or with respect to a change in $x.$ And $\frac {dw}{dx} = \frac {dw}{dt}\frac {dt}{dx}$ This is the basis of a set of "related rates" problems. When we get to multi-variable calculus it becomes more important to keep track of which variables are changing.
Let $w$ represent the amount of wax burnt. We could say that $w$ is a function of time. The quantity of wax burnt is strictly increasing and continuous. Suppose you were walking home when your wife lit the candle. We could express your distance from home also as a function of time, $x(t)$. This function is strictly decreasing and continuous. We could also express $w$ as a function of your distance from home! Then we could discuss the change in the quantity of wax burnt either with respect to a change in $t$, or with respect to a change in $x$. And $\frac {dw}{dx} = \frac {dw}{dt}\frac {dt}{dx}$. This is the basis of a set of "related rates" problems. When we get to multi-variable calculus it becomes more important to keep track of what variables are changing.

If you have a surface $z(x,y)$ and we are walking across this surface, then at any given point we might be walking in such a way that $z$ is not changing, or we might be walking straight up hill. The direction of travel is just as important as the rate of travel in measuring changes in $z$. And so we should expect that $\frac {\partial z}{\partial x}$ is unrelated to $\frac {\partial z}{\partial y}$.

Doug M

$\begingroup$ I see that when we define $w$, we are thinking of putting time into it, but I still don't really understand what we mean by a function of time. When we mathematize our problem, isn't $w$ just a function of a single real variable in our problem, and we can interpret $w$ as wax burnt when we use the time since the candle was lit as our argument? I also still am confused about what we mean by $\frac{dw}{dx}$ and $\frac{dw}{dt}$, since I don't get what we mean by differentiation "with respect to" something. $\endgroup$ – Deeside Jan 6 at 1:05

$\begingroup$ When we say "$w$ is a function of time", we are stating that we have created a mathematical model of a physical phenomenon, for which the variable $t$ will represent a measure of time. When we differentiate with respect to $t$ we are saying that we expect that $w$ will change as $t$ changes. Yes, the choice of variable name $t$ is arbitrary. We could substitute the variable $m$, but $m$ would still represent time. Or we could introduce a new concept that is also related to $w$, and describe $w$ in terms of this new concept. $\endgroup$ – Doug M Jan 6 at 1:14

$\begingroup$ Note that in the equation $\frac{dw}{dx} = \frac{dw}{dt} \frac{dt}{dx}$, the same letter $w$ is being used to refer to two different functions. This is a common abuse of notation, but I think it causes confusion. $\endgroup$ – littleO Jan 6 at 2:10

$\begingroup$ @Deeside Instead of saying "$w$ is a function of time", it would perhaps be more clear if they said, "$w$ is the function that takes a number $t$ as input and returns the amount of wax burnt after $t$ seconds as output." $\endgroup$ – littleO Jan 6 at 2:15

$\begingroup$ @littleO: Please take a look at my answer; we can set up a rigorous formal framework for differentiation in which the chain rule in Leibniz form is 100% correct (if the derivatives exist) with no abuse of notation. It is however true that there is abuse of notation in many existing textbooks. $\endgroup$ – user21820 Jan 6 at 13:40

I worry that your words and comments suggest you are conflating the system of study, the model of the system of study, and abstractions of the model. The particular ambiguities you describe come from mixing among these categories. Let us parse your wax-burning example.

System, model, abstraction, interpretation, and semantics

The System: We have a candle made of wax. It burns. At various times, we measure the cumulative wax burnt. (Perhaps we actually measure some other physical property and infer the cumulative wax burnt from this measurement. This is an experimental detail that does not concern us further.)

The Model: Let $w$ be the quantity of cumulative wax burnt, $t$ be the time, $t_0$ be the time the burning started, and $t_1$ be the time the burning stopped. From the nature of burning in the system, $w$ is a continuous function of $t$. (This is not a mathematical claim. It is syntactically equivalent to "The quantity of cumulative wax burnt is a continuous function of the time", a statement about the physics of burning.)
On theoretical grounds, $w$ is constantly zero before $t_0$, $w$ increases at a constant rate with respect to $t$ between times $t_0$ and $t_1$, and $w$ is constant for all times $t_1$ and later. During the time that $w$ increases at a constant rate with respect to $t$, we use the positive real parameter $a$ to denote the constant rate. (A critical property of the model is that it attaches symbols to the quantities of interest in the system. Without this, the symbols and inferences appearing in the upcoming abstraction can never be related to the system. Additionally, any symbol used other than $w$, $t$, $t_0$, $t_1$, and $a$ cannot be attached to the system unless it is defined in terms of those symbols.) (Notice that the model asserts "$w(t)$" will be physically meaningful, since the model asserts that the physical system is a process that converts time into cumulative burnt wax. "$t(w)$" will not be physically meaningful, since the physical system is not modeled as a process that converts cumulative burnt wax into time.)

The Abstraction: Let $T \subset \Bbb{R}$ be the minimal closed real interval containing the values of $t$ in the model and $W \subset \Bbb{R}$ be the minimal closed real interval containing the values of $w$ in the model. We have $w:T \rightarrow W$ defined by $$ w(t) = \begin{cases} 0 ,& t \leq t_0 \\ a(t - t_0) ,& t_0 < t < t_1 \\ a(t_1 - t_0) ,& t_1 \leq t \end{cases} $$ with real-valued parameter $a > 0$. (There are no quantities in the abstraction. There is no time, no wax burnt, nothing about the experiment here. In fact, the abstraction is only attached to the experiment through the model, so that the abstraction does not express anything about the system except what can be expressed through the symbolism established in the model.)

Alright, having performed that exercise, how can we find answers to your questions? The experiment establishes that we will have a relation between the cumulative wax burnt and time. The construction of the experiment is such that for each time of measurement, there will be a single quantity of cumulative wax burnt. Since each time has a single quantity of cumulative wax burnt, we model cumulative wax burnt as a function (contra. relation) of time. In the abstraction, $w$ is a map from the real values which can be times to the real values that can be quantities of cumulative wax burnt. This is the sequence of steps that we use to express "cumulative wax burnt as a function of time", "$w$ as a function of $t$", and then $w:T \rightarrow W$. This sequence of steps means that we have an interpretation of expressions "$w(X)$" in the system, as long as $X$ is an element of $T$. If $X \not\in T$, "$w(X)$" is undefined in the abstraction and has no interpretation in the system.

In the abstraction, we can certainly differentiate $w(t)$ with respect to $t$ and obtain a piecewise function, $\frac{\mathrm{d}}{\mathrm{d}t} w(t) : T \smallsetminus \{t_0, t_1\} \rightarrow \{0,a\}$. But this is not the only thing we can do. In the abstraction, we can differentiate $w(t^2)$ with respect to $t$ and get $$ \frac{\mathrm{d}}{\mathrm{d}t} w(t^2) = \left. \frac{\mathrm{d}}{\mathrm{d}s} w(s) \right|_{s = t^2} \cdot 2t \text{.} $$ In the abstraction, this is only valid for $t \in T$ where $t^2 \in T$. In the model, this is invalid: $t^2$ is not a time, it is a squared time; the model $w$ is a function of time, not squared time. So this calculation does not have an interpretation in the system.
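The abstraction above can be written down almost verbatim in a computer algebra system. A minimal SymPy sketch, keeping the parameters symbolic exactly as in the text:

```python
# The piecewise abstraction w : T -> W and its derivative, as described above.
import sympy as sp

t, t0, t1, a = sp.symbols('t t_0 t_1 a', positive=True)

w = sp.Piecewise(
    (0, t <= t0),                 # nothing burnt before ignition
    (a*(t - t0), t < t1),         # constant burn rate between t0 and t1
    (a*(t1 - t0), True),          # constant after the flame goes out
)

dw_dt = sp.diff(w, t)             # 0 or a away from the two corner points t0, t1
print(dw_dt)
```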
So the short version is: in the abstraction, we are free to perform any valid mathematical manipulation we like. Such manipulations either satisfy the semantics established by the model and have an interpretation in the system, or do not satisfy the semantics and so do not have an interpretation. We can, in fact, write many things at the level of the abstraction, but to have an interpretation in the system, such writings must conform to the model.

Interpreting a function by altering its inputs

There is a particular abuse of this notion in Physics that may be enlightening. I'll establish up front that this example is exactly the opposite of what mathematicians prefer, and I think much of your question lies in the range between these two positions. Say I have modeled a physical system as a function $f$ of position on a plane. For whatever reason, it is convenient to model position on a plane using Cartesian coordinates, with $x$ as the horizontal coordinate and $y$ as the vertical coordinate, and also using polar coordinates, with $r$ as the radial coordinate and $\theta$ as the azimuthal coordinate. Note that the language of the model assigns the same interpretation to $f(x,y)$ and $f(y,x)$ because $f$ is a function of position and we have established that a pair of $x$ and $y$ (defined to have distinguishable semantics) is a position. If the model associates the same position to one $x$ and $y$ pair as it does to one $r$ and $\theta$ pair, then the model also establishes the same interpretation in the system for all four of $f(x,y)$, $f(y,x)$, $f(r,\theta)$, and $f(\theta, r)$. These equivalences are in the model, not the abstraction. But notice that this supplies an unambiguous interpretation to the question "What is the derivative of $f(x,y)$ with respect to $\theta$?", which interpretation very likely necessitates that the answer is not zero.

When we pass from the model to the abstraction, we will fix the order of the arguments to $f$ so that $f(x,y)$ has an interpretation and $f(y,x)$ does not. Likewise we interpret $f(r,\theta)$ and not $f(\theta,r)$. (But, it is worth noting, we are free to abstractualize the order of arguments in whichever way is more convenient.)

Now to the difference between physics and math. A physicist looks at the two abstraction expressions $f(x,y)$ and $f(r,\theta)$ and sees the same $f$ as a function of position. A mathematician looks at the two abstraction expressions $f(x,y)$ and $f(r,\theta)$ and sees "the same procedure applied to the ordered pairs $(x,y)$ and $(r,\theta)$". These are very different interpretations of the same abstraction expressions. As a result, the answer to the question "What is the derivative of $f(x,y)$ with respect to $\theta$?" differs. For a physicist, one is asking how $f$ varies as its input is varied azimuthally near the Cartesian point $(x,y)$. For the mathematician, the answer is zero until we augment the model with a relation $(x,y) \leftrightarrow (r,\theta)$. (Those parenthetical lists are model positions, not abstraction ordered pairs.) Once that augmentation is in place, the mathematician interprets the question as "What is the derivative of $f(x(r,\theta),y(r,\theta))$ with respect to $\theta$?", implicitly using the model position-to-position relation to write Cartesian coordinates as a function of polar coordinates.
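Here is a short SymPy sketch of the two readings just described; the concrete $f$ below is only a stand-in chosen for illustration, not anything from the original question.

```python
# "Naive" reading: f as written contains no theta, so its derivative is zero.
# "Augmented" reading: compose with x = r cos(theta), y = r sin(theta) first.
import sympy as sp

x, y, r, theta = sp.symbols('x y r theta', real=True)

f = x**2 + y                                                   # stand-in expression in Cartesian coordinates
print(sp.diff(f, theta))                                       # 0: the mathematician's answer before augmentation

f_polar = f.subs({x: r*sp.cos(theta), y: r*sp.sin(theta)})     # f(x(r,theta), y(r,theta))
print(sp.simplify(sp.diff(f_polar, theta)))                    # -r**2*sin(2*theta) + r*cos(theta), up to rewriting
```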
The mathematician is likely to go one step further and write something like $$ \tilde{f}(r,\theta) = f(x(r,\theta),y(r,\theta)) $$ to establish in the abstraction an explicit symbolic difference between the model $f$ that is a function of Cartesian coordinates and the model $f$ that is a function of polar coordinates. Then the question is translated to "What is the derivative of $\tilde{f}(r,\theta)$ with respect to $\theta$, expressed in terms of $x$ and $y$?"

I've actually been a little harsh in the above. Both viewpoints can be unified if we do not rush to coordinates. We could represent positions as vectors in a 2-dimensional real vector space in the abstraction, denoted $\vec{v}$. Then the only expression to consider is $f(\vec{v})$. If we augment the abstraction by defining at each $\vec{v}$ a collection of four tangent vectors in the positive horizontal, positive vertical, positive radial, and positive azimuthal directions, all the apparent ambiguity in the above vanishes. This more accurately models the system, with $f$ as a function of position, not as a function of ordered coordinates relative to some basis that is not dictated by the system. (Clearly. Because the model has two coordinate systems.)

In attaching an abstraction to a system, we assign semantics to particular abstract expressions via a model. We are free to write any abstract expression we want, but such expressions need not have an interpretation relative to the semantics established by the model. The system relation "quantity one is measured with respect to quantity two" can be modeled as "$c$ represents quantity one, $d$ represents quantity two, and $c$ is a function of $d$". That model relation is then translated to the abstraction "$D$ is a set containing values of $d$, $C$ is a set containing values of $c$, and we have the function $f:D \rightarrow C:d \mapsto \dots$". This $f$ has the semantics endowed by the model of being a function of quantity two. We may abstractly treat this $f$ as a function of any abstract symbol. However, we risk losing an interpretation relative to the system if we do not write $f$ as a function of an expression having the interpretation of quantity two. We are abstractly allowed to differentiate this $f$ with respect to any expression, but we risk losing an interpretation relative to the system if we do not differentiate with respect to an expression having the interpretation of quantity two.

Eric Towers
Journal of Cluster Science, January 2019, Volume 30, Issue 1, pp 235–242

Laser-Induced Nuclear Processes in Ultra-Dense Hydrogen Take Place in Small Non-superfluid HN(0) Clusters
Leif Holmlid

Charged and neutral kaons are formed by impact of pulsed lasers on ultra-dense hydrogen H(0). This superfluid material H(0) consists of clusters of various forms, mainly of the chain-cluster type H2N. Such clusters are not stable above the transition temperature from superfluid to normal matter. In the case studied here, this transition is at 525 K for D(0) on an Ir target, as reported previously. Mesons are formed both below and above this temperature. Thus, the meson formation is not related to the long chain-clusters H2N but to the small non-superfluid cluster types H3(0) and H4(0) which still exist on the target above the transition temperature. The nuclear processes forming the kaons take place in such clusters when they are transferred to the lowest s = 1 state with an H–H distance of 0.56 pm. At this short distance, nuclear processes are expected within 1 ns. The superfluid chain-cluster phase probably has no direct importance for the nuclear processes. The clusters where the nuclear processes in H(0) take place are thus quite accurately identified.

Keywords: Ultra-dense hydrogen; Superfluid; Picometer; Transition temperature

The nuclear processes taking place in ultra-dense hydrogen H(0) under pulsed-laser impact give meson showers [1]. These showers are similar to those from nucleon–antinucleon annihilation processes [2]. Nuclear processes in H(0) are possible due to the short distance between the nuclei in the ultra-dense form: at the most common spin level s = 2 [3, 4, 5] the H–H distance is 2.245 ± 0.003 pm (recalculated from accurate data with s = 3) [6]. At the lowest spin level s = 1, the distance is only 0.56 pm [3]. This distance is expected to give nuclear reactions within 1 ns, as known from studies of muon catalyzed fusion [7] where the internuclear distance is 0.51 pm (106/207 pm). The details of the nuclear process in H(0) need to be studied, so that the process can be optimized: it gives the possibility of energy production with a previously unknown efficiency, a hundred times better than ordinary fusion. Here, the H(0) clusters in which the nuclear processes take place are identified. It is concluded that the long superfluid chain-type clusters H2N are not directly involved. Instead, small non-superfluid clusters like H3(0) and H4(0) [8], which have no specific axis, give the nuclear processes after transfer to the lowest spin state with spin s = 1 [3].

An ultra-dense form of Rydberg matter [3, 4, 5, 9, 10] has been studied over the past few years. These studies have identified two slightly different forms, ultra-dense protium p(0) [9] and ultra-dense deuterium D(0) [3, 10]. In ordinary Rydberg matter (RM) [5], the electron orbital angular momentum l is always > 0. In the ultra-dense material, which can only be formed from hydrogen, l = 0; its structure is instead given by the spin angular momentum s > 0. This quantum number was identified from laser-induced time-of-flight (TOF) and time-of-flight mass spectrometry (TOF–MS) studies [4, 10, 11, 12] to have values s = 1, 2 or 3, giving an interatomic distance of only 0.56 pm in level s = 1 [3]. Recently, optical spectroscopy was also used to give much more precise bond distances for levels s = 2, 3 and 4 [6, 13].
The density of the ultra-dense material is around 10^29 cm^-3, corresponding to a theoretical D–D distance at s = 2 of 2.23 pm [3] and the normal experimental distance in the TOF–MS studies of 2.3 ± 0.1 pm [4, 10, 11, 12]. Most experiments with ultra-dense hydrogen have in fact studied ultra-dense deuterium D(0), due to its slightly simpler structure, with less interaction between the nuclei due to their boson properties. The quantum mechanical basis for D(0) was first discussed by Winterberg [14, 15], pointing out the similarity to other superfluids [16]. Theory [17, 18, 19] predicts a dense deuterium phase with both superfluid and superconducting properties.

Experimental studies of clusters of ultra-dense hydrogen H(0) show that they are chain clusters of the form H2N with N an integer. This form is shown at the top in Fig. 1. Each H2 "bead" is formed by a pair H–H which rotates around the cluster axis. Each such cluster shows a Meissner effect, thus it floats in a static magnetic field [8, 20]. This is characteristic of a superconducting material. Hirsch [21] describes the superconducting state of a material as having large Rydberg-like orbits. This is similar to an ordinary RM cluster state with l > 0, but here with the plane of the orbit given by the magnetic field direction, not by the geometry of the cluster as in the case of an ordinary (orbital angular momentum-based) RM cluster. Only one or a few of the electrons in each cluster are simultaneously in such high Rydberg-like states. These electrons are the ones most easily displaced by the laser-pulse field.

[Fig. 1: Shape of the chain or "bead" clusters H2N(0) forming the superfluid phase H(0), and the non-superfluid cluster types H3(0) and H4(0).]

Both forms of H(0), ultra-dense protium p(0) [9] and ultra-dense deuterium D(0) [3, 10], are superfluid at room temperature [22], as shown for example by a fountain effect and by the fast laser energy transport. They also show a Meissner effect at room temperature, which means that the long chain-like clusters lift in a static magnetic field [8, 20]. This effect is less pronounced for p(0) [20], presumably due to its slightly more complex cluster structure compared to D(0) [8, 9]. This difference is probably due to the formation of proton pairs, which gives three protons p3 coupled together instead of just a proton pair p2 in a "bead" [9]. Due to the large difference in scale between the ultra-dense material and the metal carrier surface (typically 2 pm instead of 200 pm for the carrier), many novel effects are expected. It means for example that an entire chain cluster H2N may fit in between two metal atoms on the surface.

In the Meissner experiments using D(0) [8], it was clearly observed that small clusters D3(0) and D4(0) do not float in the magnetic field, thus they do not show a Meissner effect. Long chain clusters D2N on the other hand float in the static magnetic field. The small clusters do not have a long axis due to their symmetry, as shown at the bottom in Fig. 1. It was concluded in Ref. [22] that such small clusters probably do not form a superfluid layer on the metal carrier surface used in the experiments. These results indicate that a material formed from such small symmetric ultra-dense clusters will not have superfluid or superconductive properties. In the TOF studies, these small clusters give a signal clearly different from the long chain-clusters. This is here used to identify the clusters in which the nuclear processes take place.
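As a rough consistency check of the density and distance quoted above, the cube root of the inverse number density already reproduces the picometre scale. This is only a sketch: the simple estimate ignores the actual packing geometry of the chain clusters.

```python
# Order-of-magnitude check: number density -> nearest-neighbour spacing.
n = 1e29                      # stated density in cm^-3
d_cm = n ** (-1.0 / 3.0)      # spacing for a simple cubic arrangement
print(d_cm * 1e10, "pm")      # ~2.15 pm, close to the 2.23 pm quoted for s = 2
```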
The decay signal is measured at two collectors (or one coil) in the beam ejected from the pulsed-laser interaction with ultra-dense hydrogen H(0). The laser pulse has a typical width of 7 ns, with a rise-time of 3 ns. The signal at the collectors (and at the coil) is in the form of a pulse with quite well-defined rise and fall times. This means that the signal is due to an intermediate particle M which is formed and decays like A → M → N [23]. Often, approximately the same time dependence is observed at both collectors, which means that slow decaying particles M at the target give the fast detected particles N at the collectors. The time dependence of the signal M is easily derived from the rate equations for the process $\text{A}\xrightarrow{k_{1}}\text{M}\xrightarrow{k_{2}}\text{N}$ [23]:

$$-\frac{dn_{\text{A}}}{dt} = k_{1} n_{\text{A}}$$
$$\frac{dn_{\text{M}}}{dt} = k_{1} n_{\text{A}} - k_{2} n_{\text{M}}$$
$$n_{\text{M}} = \frac{k_{1}}{k_{2} - k_{1}}\, n_{\text{A0}} \left(e^{-k_{1} t} - e^{-k_{2} t}\right)$$

where nA0 is the number density of the precursor at time t = 0, thus at the start of the laser pulse. This derivation sets the initial number density of the intermediate meson nM0 to zero. The curve shape in Eq. (3) is used to match the results below. The results are given as time constants τ = 1/k.

The apparatus is shown in Fig. 2. It has been described in several publications, for example in Ref. [10]. It has a base pressure of < 1 × 10^-6 mbar. The central source part is described in Ref. [4]. The emitter is a cylindrical (extruded) sample of an industrial iron oxide catalyst doped with K [24, 25], a so-called styrene catalyst of type Shell S-105 (obsolete type number). This type of catalyst is an efficient hydrogen abstraction and transfer catalyst. The source metal tube can be heated by an AC current through its wall up to 400 K. Deuterium gas (99.8%) is admitted through the tube at a pressure up to 1 × 10^-5 mbar in the chamber.

[Fig. 2: Horizontal cut through the apparatus.]

The metal target is constructed for direct heating with a 50 Hz AC current. The carrier Ni foil with dimension 12 × 15 mm has a thickness of 0.2 mm. It is spot welded to two thinner foils of Ta with thickness 0.1 mm which carry the heating current. On the carrier, short Ir rods with 2 mm diameter are spot-welded. The carrier foil is mounted at 45° to the vertical direction and located approximately 1 cm below the source tip. The heating current through the carrier and its supporting foils is taken from an external ring transformer with a few turns of secondary winding. The temperature of the carrier is measured by a type K thermocouple (TC), spot welded at the upper half of the carrier foil. The cold end of the TC is at the screw support on the arms holding the carrier foil, at a distance of 20 cm from the carrier.

A Nd:YAG laser with an energy of < 125 mJ in 5 ns long pulses at 10 Hz is used at 532 nm. The laser beam is focused on the carrier at the center of the chamber with an f = 400 mm spherical lens. The lens is mounted in a vertical motion translation stage. The intensity in the beam waist of (nominally) 70 µm diameter is relatively low, ≤ 10^12 W cm^-2 as calculated for a Gaussian beam.

A dynode-scintillator-photomultiplier detector which measures the time-of-flight (TOF) spectra of the neutral and ionized flux from the laser-initiated processes is shown in Fig. 2. This detector can be rotated around the carrier at the center of the chamber.
Fast neutral particles impact on a steel catcher foil in the detector, and ions ejected from there (or in the beam) are drawn towards a Cu–Be dynode held at −7.0 kV inside the detector. The total effective flight distance for the particles from the laser focus to the catcher foil is 101 mm by direct measurement and internal calibration [5, 10]. The photomultiplier (PMT) is Electron Tubes 9128B (single electron rise time of 2.5 ns and transit time of 30 ns). This PMT is covered by Al foil and black plastic tape giving only a small active cathode area of 2–3 mm^2 to avoid saturation. Blue glass filters in front of the PMTs remove any scattered laser light. A fast preamplifier (Ortec VT120A, gain 200, bandwidth 10–350 MHz) is used. The signal from the PMT is sometimes collected on a fast digital oscilloscope (Tektronix TDS 3032, 300 MHz). The average function in the oscilloscope is used. A multi-channel scaler (MCS) with 5 ns dwell time per channel is used (EG&G Ortec Turbo-MCS) for the TOF–MS spectra. Each MCS spectrum consists of the sum of 500 laser shots.

The laser-induced mass spectrometry used here is described in several publications [3, 4, 10, 11, 12]. Due to the very short bond distances in ultra-dense hydrogen and also in low levels of ordinary hydrogen Rydberg matter H(1) and H(2), the kinetic energy release (KER) given to the cluster fragments by the Coulomb explosions (CE) is quite high. It is also well-defined, due to the easy removal of the orbiting Rydberg electrons by the laser pulse, without any large changes of the structure before the fs long repulsion period between the fragments [26]. The total kinetic energy of the fragments gives directly their initial distance as

$$r = \frac{1}{4\pi \varepsilon_{0}}\frac{e^{2}}{E_{kin}}$$

where ε0 is the vacuum permittivity, e the unit charge on the fragment ions and Ekin the sum kinetic energy for the fragments (KER) from the CE. The fraction of the KER that is observed on each fragment depends on the mass ratio of the fragments. The kinetic energy is determined by measuring the time-of-flight (TOF) of the fragments and then converting this quantity to kinetic energy. In the case of long H(0) chain clusters, a central fragmentation is often observed, thus giving two identical cluster fragments, each carrying half the total KER. The most common state of H(0) has s = 2 [3]. It has 2.3 pm H–H distance and gives a total KER of 640 eV.

Part of the flux from the laser-induced processes on the target is taken out through a small opening in the chamber wall to a separately pumped and valved chamber with two collectors consisting of a metal wire loop covered with 1–2 layers of 20 µm thick Al foil. The loops have a diameter of 50 mm inside the tube with internal diameter 63 mm. The collectors are at a distance of 66 cm (inner collector) and 163 cm (outer collector) from the target as seen in Fig. 2. The signal to them is directly observed on a fast digital oscilloscope (Tektronix TDS 3032, 300 MHz). The oscilloscope is triggered by a pulse from a photodiode close to the laser. The diode is located such that the trigger delay in the cable to the oscilloscope is close to the time for the pulse to move to the target and through the cable to the oscilloscope. The error is estimated to be ±1–2 ns [27]. The typical behavior of the signals is shown in several publications [1, 23]. The decay times of these signals are characteristic for the decaying mesons.
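The two conversions used throughout the TOF analysis — flight time to kinetic energy over the fixed 101 mm path, and summed KER to initial separation via Eq. (4) — are easy to sanity-check numerically. The sketch below uses the values quoted in the text; a full analysis would of course include the mass-ratio sharing of the KER between fragments.

```python
# TOF -> kinetic energy, and KER -> initial separation (Eq. 4).
from scipy.constants import e, epsilon_0, pi, atomic_mass

L = 0.101                              # effective flight distance in m
m_D = 2.014 * atomic_mass              # deuterium atom mass in kg

def ekin_from_tof(t_s):
    """Kinetic energy (eV) of a D fragment arriving after t_s seconds."""
    v = L / t_s
    return 0.5 * m_D * v**2 / e

def distance_from_ker(E_eV):
    """Initial separation r (pm) from the summed KER, Eq. (4)."""
    r = e**2 / (4 * pi * epsilon_0 * (E_eV * e))
    return r * 1e12

print(ekin_from_tof(204e-9))           # ~2.5 keV, the shortest-TOF (s = 1) case in Table 1
print(distance_from_ker(640.0))        # ~2.25 pm, matching the 640 eV KER quoted for s = 2
```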
The signal is normally due to fast (relativistic) muons arriving to the collectors from the relatively slow decaying mesons (kaons and pions). In some tests the inner collector is replaced by a wire coil wound on a ferrite toroid. This coil works as a current transformer for the particle beam current. The signal is due to charged fast particles, and photons and other neutral particles like neutral kaons cannot be observed by this coil. A direct comparison of the coil signal and the outer collector signal shows that the same particles are observed, thus proving that the outer collector signal is due to fast charged particles and not to photons or neutral particles. The measuring coil is wound on a N30 MnZn soft ferrite toroid with epoxy cover (EPCOS) with inner diameter 25 mm. There are 19 turns of enameled copper wire wound on it in the negative direction. The toroid hangs freely in the conductor wires in the particle beam. One end of the coil is in turn connected to the 50 Ω oscilloscope input, while the other end is connected to a 50 Ω BNC termination to the mounting flange. The sign of the beam current is determined by calibration using a current pulse in a single wire through the opening of the coil, to avoid possible sign errors. The temperature variation of the H(0) clusters on the foil target was studied with the inner detector shown in Fig. 2 in Ref. [28]. It was demonstrated that the long chain-clusters H2N(0) disappear at a transition temperature in the range 405–725 K, varying with the carrier material and the ultra-dense hydrogen form, being protium p(0) or deuterium D(0). Results for Ir as the surface and with D(0) as the ultra-dense hydrogen are shown in Fig. 3. The shortest TOF possible for different excitation levels in deuterium is shown in Table 1. These data are based on extensive studies [3, 10, 11, 12, 22]. The large chain-clusters D(0) are found in the time range 1–4 µs, only at temperatures below 525 K [28]. They give relatively large fragments, and not atoms D or atomic ions D+. The fastest part of the signal is D atoms from small D(0) clusters at 200–500 ns TOF. A sum of all distributions from all temperatures is shown in Fig. 4. A large part of this distribution comes from D(0) in the lowest level with s = 1, at a D–D distance of 0.56 pm. At such short distances, nuclear processes are expected to be fast [7]. A large peak at 20–40 ns in Fig. 4 also exists, which has not been assigned previously. It may be due to processes giving a typical nuclear based energy of 100 keV u−1. TOF spectra at 45° detector position relative to laser beam. D(0) on Ir surface. Temperature variation shown The shortest possible TOF for D (repelled from infinite mass) from symmetric and asymmetric CE processes with 2 and 3 charges in the D(RM) clusters [12] Excitation level KER (eV) CE 2+ D(l = 0, s = 1) 204 ns D(1) 3.4 µs 10.1 µs These times are indicated by vertical lines in the TOF spectra Sum of all the TOF spectra in Fig. 3, showing that the distribution peaking at 300–400 ns contains a large contribution from the lowest level s = 1 in which nuclear processes are expected The MeV TOF signal to the inner and outer collectors shown in Fig. 2 was studied in Ref. [1]. The collector signals have the time dependence of an intermediate particle in a decay chain [23], with the decay times equal to the meson lifetimes 12.4 ns for charged kaons K±, 26 ns for charged pions π± and 52 ns for long-lived neutral kaons \({\text{K}}_{L}^{0}\). In Fig. 
In Fig. 5, the signal at the outer collector is shown to agree with the decay time for charged kaons, with the Ir target both cold and warm (at approximately 800 K). This temperature is considerably higher than the transition temperature for D(0) on Ir, which is 525 K [28]. Thus, the meson signal is formed also above the transition temperature where the chain-clusters are no longer stable. In the same experiment, the results in Fig. 6 were found for the inner collector. This signal is more complex and shows contributions from both charged pions and charged kaons. The signals at high and low temperature were identical, and thus only the high temperature signal is shown here. Since this signal is measured at relatively short distance from the target, it may also contain contributions from not yet decayed pions, not only from the muons formed by pion decay. This may give the slightly worse fit of the calculated curves in this case relative to Fig. 5, for example.

[Fig. 5: Charged kaon signal at the outer collector in Fig. 2 with bias −24 V. The meson signal exists even at a temperature of 800 K, above the D(0) on Ir transition temperature of 525 K [28].]

[Fig. 6: Combined charged kaon and pion signal at the inner collector in Fig. 2 with bias −24 V. The meson signal exists even at a temperature of 800 K, above the D(0) on Ir transition temperature of 525 K [28].]

That the signals at the inner and outer collectors contain different contributions from mesons is often observed, and it is difficult to control the various meson contributions at the nuclear level to the meson shower in the experiments. This effect may sometimes be due to the different energies and angular widths of the signal distributions finally decaying to the muons observed as a signal current to the collectors. The initially formed kaons (neutral and charged) have quite low kinetic energy, while the pions which are mainly formed from decaying neutral kaons [29, 30, 31] have much higher transverse translational energy. This means that some of them move out from the beam: their muon decay products may thus not reach the outer collector with the same probability. However, the main charged kaon decay channel gives directly fast muons [29, 30, 31] which may be more easily observed at the outer collector. The important point is that mesons are formed even at high target temperature: thus they are formed from the small D(0) clusters that are the only D(0) clusters which exist on the target at high temperature.

The only uncertainty in the collector measurements is the exact (at least the dominant) sign of charge of the observed particles. It may also be important to exclude that this signal is due to photons. These problems have been solved by replacing the inner collector with a coil for a positive (direct) observation of the particle current in the beam. The difference between the signals at the two ends of the coil is the particle current signal. Results are shown in Figs. 7 and 8. In these figures (coil and outer collector measured in the same experiment) the signal in this case is due to the sum of charged kaons and charged pions decaying to muons, in a chain K± → π± → µ± [29, 30, 31]. (Note that the pions π± are allowed to have a rise time of 12 ns, equal to the decay time of the kaons K±, as required for this decay chain.) That the signal due to pions is lower in the coil than at the collector may be due to the lower kinetic energy of the muons from pion decay (maximum 139 MeV) relative to the muons from the direct kaon decay (not via pions, maximum 493 MeV).
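For reference, the intermediate-particle curve shape of Eq. (3) that is fitted to these collector signals can be evaluated directly. In the sketch below, the decay time constant is the 12.4 ns charged-kaon lifetime used above, while the formation time constant is a placeholder value standing in for the laser-pulse time scale, not a fitted number.

```python
# n_M(t) from Eq. (3): an intermediate formed with time constant tau1 and
# decaying with time constant tau2 (in ns).
import numpy as np

def intermediate(t_ns, tau1=5.0, tau2=12.4, n_A0=1.0):
    k1, k2 = 1.0 / tau1, 1.0 / tau2
    return n_A0 * k1 / (k2 - k1) * (np.exp(-k1 * t_ns) - np.exp(-k2 * t_ns))

t = np.linspace(0.0, 100.0, 201)        # ns
signal = intermediate(t)                # rises for a few ns, then decays on the ~12.4 ns scale
print(t[np.argmax(signal)])             # position of the signal maximum
```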
Since the amplitude of the signal in the coil varies with the velocity of the charges passing through, the kaon-generated muons may give a larger induced signal per charge. The important points are that charged particles (pions and kaons) are indeed observed at the collector and that the collector signal is not due to photons. The sign of the current is verified to be positive from separate tests with a wire through the coil. The agreement between the meson decay results at high and low target temperature and the coil measurements is reassuring.

[Fig. 7: Current coil signal with D(0) on target. Coil at the position of the inner collector in Fig. 2. Note that the pions π± have a rise time of 12 ns, equal to the decay time of the kaons K±, as required for the decay chain K± → π± → µ±. The actual particle current is probably due to muons.]

[Fig. 8: Signal at the outer collector with bias −24 V, with the coil in the beam in the same experiment as in Fig. 7. Note that the pions π± have a rise time of 12 ns, equal to the decay time of the kaons K±, as required for the decay chain K± → π± → µ±. The actual signal is probably due to muons giving secondary electron ejection at the collector.]

The laser experiments sample the surface layer of D(0) on the target. Two different types of processes are initiated by the laser pulse. The first type of process is Coulomb explosions (CE) in the D(0) clusters [4, 5, 10], giving ions and atoms at a maximum kinetic energy of 2.6 keV as shown in Figs. 3 and 4 and Table 1. This process is initiated by the laser photons which remove electrons from the clusters (mainly the highly excited superconductive electrons in Rydberg-like orbits [21]). The second type of process is nuclear, giving meson ejection followed by meson decay [1, 27, 32, 33], with final particle kinetic energies in the range up to 500 MeV. This process is instead initiated by the transfer of the clusters from spin level s = 2 to s = 1, which is the lowest level in D(0), having an internuclear distance of only 0.56 pm [10]. It is possible that these two processes exist at different locations relative to the center of the laser beam, with the nuclear processes closer to the beam center and the CE processes in the outskirts of the laser beam. In general, the best CE experiments (TOF and TOF–MS) are done at lower laser intensity than the best meson experiments, so they are normally not performed under identical laser intensity conditions. The difference in laser intensity is however less than a factor of two, which means that the changes in the surface layer between these two experimental types are rather small. Thus, the results from these two types of experiment can be compared to each other.

The observed short interatomic H–H distances in the pm range [4, 5, 10] are also related to magic numbers and super heavy elements as found by Prelas et al. [34]. The idea is that deuterons at the common 2 pm interatomic distance (which corresponds to s = 2) may build Bose–Einstein clusters [34]. The typical total particle number ejected in the meson experiments is 10^12–10^13 per laser pulse [1, 35], with energy (velocity) of 10–500 MeV u^-1. The total number of particles ejected in the CE experiments is more difficult to estimate due to the much more complex signal conversion chain needed for the ejected low energy ions and atoms, in the typical initial energy range of only 320–640 eV. This number is here estimated to be in the range of 10^6–10^9 per laser pulse. Numerous particles are formed with lower energy than 300 eV, but they are not included here.
The conclusion that the mesons originate from the small H(0) clusters is also based on the assumption that no other type of D(0) is still stable at high temperature and can be excited by the laser pulse to give mesons. It could be possible that some D(0) cluster type exists inside the metal surface layer, and that the laser pulse can excite them to form mesons even if it is not easily detectable by the CE experiments: the CE experiments may use less laser power to dig into the metal surface. It can be observed that the higher Rydberg matter level D(1) and maybe even D(2) exist on the surface above the transition temperature for D(0). Thus, it is only the long chain-clusters of D(0) which become unstable above the transition temperature, which is expected and agrees with the superfluid properties. However, a fast transformation of D(1) with 150 pm interatomic distance [5] to D(0) in s = 1 with 0.56 pm internuclear distance, required for the meson ejection, seems unlikely.

It is apparent that the small D(0) non-superfluid clusters form another surface phase than the superfluid long-chain D(0) clusters. In Fig. 3, the signal of the small clusters at 200–500 ns TOF does not increase with increasing temperature at the transition temperature. This shows that the long-chain clusters do not dissociate to small clusters rapidly but rather decompose to atoms or pairs of atoms which are primarily incorporated in the D(1) cluster structure, which seems to increase at high temperature. Of course, the processes in the surface layer must be understood to be fast processes close to equilibrium, catalyzed by the metal surface. Alternatively, the dissociating long-chain clusters could be thought to diffuse into the metal surface; however, such a state will still be in close equilibrium with the clusters on the surface. Thus, diffusion into the bulk surface is not believed to be an important or independent process.

The nuclear processes giving the meson formation should also be discussed. Both the kaons and pions observed contain two quarks, while the nucleons in the deuterons each contain three quarks. At present, the best suggestion for the nuclear processes taking place is that two protons with a total of six quarks form three mesons. This is energetically possible since two protons have a total mass of 2 × 938 MeV = 1876 MeV and three kaons have approximately 3 × 495 MeV = 1485 MeV [29, 30, 31]. Thus, this process has an excess energy of 391 MeV, or approximately 130 MeV per kaon formed, which is quite realistic and in agreement with experimental results. One type of such a process requires the transformation of d quarks to s quarks (or d to $\bar{\text{s}}$), which is possible by the weak interaction [29]. Due to the long interaction time, from nanoseconds to days, at < 1 pm distance, such processes ought to exist. In this case, mainly charged kaons may be formed ($\text{K}^{+} = \text{u}\bar{\text{s}}$, $\text{K}^{-} = \text{s}\bar{\text{u}}$) and not so many neutral kaons. This agrees with the present experimental results. Since positive charge dominates in the present experiments, $\text{K}^{+} = \text{u}\bar{\text{s}}$ should be formed preferentially from the two protons, possibly as $\text{K}^{+} + \text{K}^{+} + \text{K}_{L}^{0}$. The meson-ejecting nuclear processes in D(0) take place in small D(0) clusters with typically 3–4 atoms.
These clusters do not form a superfluid phase on the metal target surface and are not directly coupled to the long-chain D(0) clusters which form a superfluid phase with a Meissner effect as reported previously. The actual D(0) cluster shape which supports the laser-induced nuclear processes in D(0) is thus identified.

This study would not have been possible without the support of Dr. Bernhard Kotzias at Airbus DS, Bremen, Germany. Parts of the equipment used for the experimental studies have been built with the support of GU Ventures AB, the holding company at the University of Gothenburg.

References:
L. Holmlid (2017). PLOS ONE 12, e0169895. https://doi.org/10.1371/journal.pone.0169895.
E. Klempt, C. Batty, and J.-M. Richard (2005). Phys. Rep. 413, 197.
L. Holmlid (2013). Int. J. Mass Spectrom. 352, 1.
P. U. Andersson, B. Lönn, and L. Holmlid (2011). Rev. Sci. Instrum. 82, 013503.
L. Holmlid (2012). J. Clust. Sci. 23, 95.
L. Holmlid (2017). J. Mol. Struct. 1130, 829. https://doi.org/10.1016/j.molstruc.2016.10.091.
D. V. Balin, V. A. Ganzha, S. M. Kozlov, E. M. Maev, G. E. Petrov, M. A. Soroka, G. N. Schapkin, G. G. Semenchuk, V. A. Trofimov, A. A. Vasiliev, A. A. Vorobyov, N. I. Voropaev, C. Petitjean, B. Gartner, B. Lauss, J. Marton, J. Zmeskalc, T. Case, K. M. Crowe, P. Kammel, F. J. Hartmann, and M. P. Faifman (2011). Phys. Part. Nuclei 42, 185.
P. U. Andersson, L. Holmlid, and S. R. Fuelling (2012). J. Supercond. Novel Magn. 25, 873.
L. Holmlid (2013). Int. J. Mass Spectrom. 351, 61.
S. Badiei, P. U. Andersson, and L. Holmlid (2010). Phys. Scripta 81, 045601.
S. Badiei, P. U. Andersson, and L. Holmlid (2010). Appl. Phys. Lett. 96, 124103.
F. Winterberg (2010). J. Fusion Energy 29, 317.
F. Winterberg (2010). Phys. Lett. A 374, 2766.
T. Guénault, Basic Superfluids (Taylor & Francis, London, 2003).
L. Berezhiani, G. Gabadadze, and D. Pirtskhalava (2010). J. High Energy Phys. 2010, 122. https://doi.org/10.1007/JHEP04(2010)122.
P. F. Bedaque, M. I. Buchoff, and A. Cherman (2011). J. High Energy Phys. 2011, 94. https://doi.org/10.1007/JHEP04(2011)094.
E. Babaev, A. Sudbø, and N. W. Ashcroft (2004). Nature 431, 666.
L. Holmlid and S. R. Fuelling (2015). J. Cluster Sci. 26, 1153.
J. E. Hirsch (2012). Phys. Scr. 85, 035704.
P. U. Andersson and L. Holmlid (2011). Phys. Lett. A 375, 1344.
L. Holmlid (2015). Int. J. Modern Phys. E 24, 1550026.
G. R. Meima and P. G. Menon (2001). Appl. Catal. A 212, 239.
M. Muhler, R. Schlögl, and G. Ertl (1992). J. Catal. 138, 413.
S. Badiei, P. U. Andersson, and L. Holmlid (2009). Int. J. Mass Spectrom. 282, 70.
L. Holmlid and B. Kotzias (2016). AIP Adv. 6, 045111.
W. E. Burcham and M. Jobes, Nuclear and Particle Physics (Pearson, Harlow, 1995).
C. Nordling and J. Österman, Physics Handbook (Studentlitteratur, Lund, 1988).
K. S. Krane, Introductory Nuclear Physics (Wiley, Hoboken, 1988).
M. A. Prelas, H. Hora, and G. H. Miley (2014). Phys. Lett. A 378, 2467.
L. Holmlid (2013). Laser Part. Beams 31, 715.

1. Atmospheric Science, Department of Chemistry and Molecular Biology, University of Gothenburg, Göteborg, Sweden

Holmlid, L. J Clust Sci (2019) 30: 235. https://doi.org/10.1007/s10876-018-1480-5
Solutions to Fall 2015 Practice Final Exam 1
Stony Brook Physics, PHY132 Studio, Fall 2015

===== Question 1 =====
{{phy141f12finalq1fig.png}}
A satellite in a circular geostationary orbit above the Earth's surface (meaning it has the same angular velocity as the Earth) explodes into three pieces of equal mass. One piece remains in a circular orbit at the same height as the satellite's orbit after the collision, but the velocity of this piece is in the opposite direction to the velocity of the satellite before the explosion. After the explosion the second piece falls from rest in a straight line directly to the Earth. The third piece has an initial velocity in the same direction as the original direction of the satellite, but now has a different magnitude of velocity. The diagram is obviously not remotely to scale!

===== Question 2 =====
{{phy141f12finalq2fig.png}}
An inclined plane has a coefficient of static friction $\mu_{s}=0.4$ and kinetic friction $\mu_{k}=0.3$.

A. (5 points) What is the maximum inclination angle $\theta$ for which a stationary block on the plane will remain at rest?

The condition for when static friction and the gravitational force down the plane exactly cancel is given by
$mg\sin\theta=\mu_{s}mg\cos\theta$
$\theta=21.8^{o}$

B. (10 points) If a ball ($I=\frac{2}{5}Mr^{2}$) is placed on the incline when it has the inclination angle you found in part (a) and rolls through a distance of 1.5m, what is the translational velocity of the ball when it reaches the bottom of the incline?

For rolling
$KE=\frac{1}{2}mv^{2}+\frac{1}{2}I\omega^{2}=\frac{1}{2}mv^{2}+\frac{1}{2}\frac{2}{5}mr^{2}\frac{v^{2}}{r^{2}}=\frac{7}{10}mv^{2}$
$v=2.79\mathrm{m\,s^{-1}}$

C. (10 points) What percentage of the kinetic energy of the ball is rotational while it is rolling down the incline?

$\frac{\frac{2}{5}}{\frac{2}{5}+1}=0.29=29\%$

===== Question 3 =====
{{m1fig2_2011.png}}
A plane is flying horizontally with a constant speed of 100m/s at a height $h$ above the ground, and drops a 50kg bomb with the intention of hitting a car that has just begun driving up a 10$^{\circ}$ incline which starts a distance $l$ in front of the plane. The speed of the car is a constant 30m/s. For the following questions use the coordinate axes defined in the figure, where the origin is taken to be the initial position of the car.

A. (5 points) What is the initial velocity of the bomb relative to the car? Write your answer in unit vector notation.

$v_{x}=100-30\cos(10^{o})=70.46\mathrm{ms^{-1}}$
$\vec{v}=70.46\mathrm{ms^{-1}}\hat{i}-5.21\mathrm{ms^{-1}}\hat{j}$

B. (5 points) Write equations for both components ($x$ and $y$) of the car's displacement as a function of time, taking t=0s to be the time the bomb is released.

$x=30\cos(10^{o})t=29.54t\,\mathrm{m}$
$y=30\sin(10^{o})t=5.21t\,\mathrm{m}$

C. (5 points) Write equations for both components ($x$ and $y$) of the bomb's displacement as a function of time, taking t=0s to be the time the bomb is released.

$x=100t-l\,\mathrm{m}$
$y=h-\frac{1}{2}gt^{2}\,\mathrm{m}$

D. (5 points) If the bomb hits the car at time t=10s, what was the height of the plane above the ground $h$ when it dropped the bomb?

$y=52.1\mathrm{m}$
$h=52.1+50\times9.81=542.61\mathrm{m}$

E. (5 points) What is the horizontal displacement of the plane relative to the car when the bomb hits the car at t=10s?

$0\mathrm{m}$

F. (5 points) How much kinetic energy does the bomb have when it hits the car?

$\frac{1}{2}mv_{0}^{2}+mg\Delta h=\frac{1}{2}\times50\times100^{2}+240590=250000+240590=490590\mathrm{J}$

===== Question 4 =====
{{phy141f12finalq4fig.png}}
A 90cm tall cylindrical steel oil drum weighs 15kg and has an external volume of 0.2m$^{3}$. When full the drum contains 190L of crude oil. The density of crude oil is 900kg/m$^{3}$ and of water is 1000kg/m$^{3}$.

A. (5 points) What is the external diameter of the oil drum?

$\pi \frac{d^{2}}{4}h=0.2$
$d=0.53\mathrm{m}$

B. (10 points) If the oil drum has fallen overboard and is floating upright in the sea, what length of the oil drum $x$ sticks up above the water? As a simplifying approximation you may assume that all the steel is on the sides of the drum, and the top and bottom do not contribute to the mass of the drum.

The internal volume of the drum is $190\mathrm{L}=0.19\mathrm{m^{3}}$, which means that the steel has a volume of $0.01\mathrm{m^{3}}$ and thus a density of $1500 \mathrm{kg/m^{3}}$. The average density of the barrel is then
$\frac{1500\times0.01+900\times0.19}{0.01+0.19}=930\mathrm{kg/m^{3}}$
$x=0.063\mathrm{m}=6.3\mathrm{cm}$

C. (5 points) What is the pressure at the bottom of the drum? You may assume standard atmospheric pressure for the air above the drum.

$P=P_{atm}+\rho gh=1.013\times10^{5}+1000\times9.8\times0.837=1.095\times10^{5}\mathrm{Pa}$

===== Question 5 =====
{{phy141f12finalq6fig.png}}
The speed of sound in air is $v=331+0.6T\, \mathrm{ms^{-1}}$ where $T$ is the temperature in $^{\circ}$C. Two organ pipes of length 0.6m which are open at both ends are used to produce sounds of slightly different frequencies by heating one of the tubes above the room temperature of 20$^{\circ}$C.

A. (5 points) What is the lowest frequency sound that can be produced in the room temperature pipe?

$v_{sound}=343\mathrm{m\,s^{-1}}$
$f=\frac{v}{2l}=\frac{343}{2\times0.6}=285.8\mathrm{Hz}$

B. (5 points) Draw on the figure the form of the standing wave amplitude for both displacement and pressure that produces this sound.

See [[phy131studiof15:lectures:chapter18&#open_and_closed_pipes|here]].

C. (5 points) To produce beats with a beat frequency of 1Hz with the sound produced in (a) using the first harmonic of the heated pipe, what should the temperature of the heated pipe be?

$\Delta f=1\mathrm{Hz}$
$f_{T}=286.8\mathrm{Hz}$
$T=\frac{344.2-331}{0.6}=22\mathrm{^{o}C}$

D. (5 points) At what speed should a person running toward the tubes be moving so that the sound from the room temperature tube has an apparent frequency equal to the actual frequency of sound produced by the heated tube?

$f'=\frac{v_{sound}+v_{obs}}{v_{sound}}f$
$286.8=\frac{343+v_{obs}}{343}285.8$
$v_{obs}=1.2\mathrm{m\,s^{-1}}$

E. (5 points) What is the frequency of the second harmonic that the room temperature pipe produces?

$f=2\times285.8=571.6\mathrm{Hz}$

===== Question 6 =====
{{phy141f12finalq8fig.png}}
The diagram shows the P-V diagram for a 40% efficient ideal Carnot engine. Assume the gas used in this Carnot engine is an ideal diatomic gas.

A. (5 points) For every Joule of work obtained from the engine, how much heat needs to be added to the engine?

$e=\frac{W}{Q_{H}}$
$Q_{H}=\frac{1}{0.4}=2.5\mathrm{J}$

B. (5 points) For every Joule of work obtained from the engine, how much heat is lost to the environment?

$Q_{H}=W+Q_{L}$
$Q_{L}=1.5\mathrm{J}$

C. (5 points) At points B and D the gas in the system has the same volume, but different temperatures. If the gas at point D is at twice atmospheric pressure, what is the pressure of the gas at point B?

$P_{B}V_{B}=nRT_{H}$
$P_{D}V_{D}=nRT_{L}$
$P_{B}=3.33P_{atm}$

D. (5 points) If the volume of the gas at point B is 1L, what is the volume of the gas at point C?

The expansion from B to C is adiabatic so $P_{B}V_{B}^{\gamma}=P_{C}V_{C}^{\gamma}$
For a diatomic gas $\gamma=\frac{7}{5}$
$V_{C}=3.6\mathrm{L}$

E. (5 points) How much does the net entropy of the engine and the environment change for every Joule of work done by this Carnot engine?

$\Delta S=0\mathrm{\frac{J}{K}}$
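A quick numerical cross-check of the Carnot-engine answers above (assuming, as the problem states, an ideal diatomic gas and 40% efficiency):

```python
# Verify Q_H, Q_L, P_B and V_C for the Carnot engine question.
gamma = 7.0 / 5.0
eta = 0.40

Q_H = 1.0 / eta                      # heat in per joule of work      -> 2.5 J
Q_L = Q_H - 1.0                      # heat rejected per joule        -> 1.5 J

T_ratio = 1.0 / (1.0 - eta)          # T_H / T_L = 1/0.6
P_B = 2.0 * T_ratio                  # P_D = 2 atm and V_B = V_D      -> ~3.33 atm

V_B = 1.0                            # litres
V_C = V_B * T_ratio ** (1.0 / (gamma - 1.0))   # adiabat B->C: T V^(gamma-1) constant -> ~3.6 L

print(Q_H, Q_L, P_B, V_C)
```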
MediaWiki math markup

This page is outdated; see Help:Displaying a formula for how to insert a formula in a wiki page.

MediaWiki supports LaTeX markup:
<math>\pi = \frac{3}{4}\sqrt{3} + 24\int_{0}^{1/4}\sqrt{x-x^{2}}\,dx</math>
<math>\text{cost} = \text{base}\times 2^{\text{level}-1}</math>

Click here to try it out in the sandbox! Once in the sandbox:
1. Click the Mathematical Formula (LaTeX) icon above to get this: <math>Insert formula here</math>
2. Then enter any LaTeX markup (see Help:Formula), for example: <math>\alpha^2+\beta^2=1</math> ...which displays the rendered formula <math>\alpha^{2}+\beta^{2}=1</math>.

Statement of the Situation (March, 2008)

Currently, mathematical equations are entered in LaTeX and displayed as images. A disadvantage of this is that the user cannot copy the content page and modify the equation. Ideally, a user should be able to create a mathematical formula with a word processor, copy and paste the formula into the MediaWiki edit box, and have the formula rendered on the wiki in such a manner that a second user can copy the rendered content page, paste it into a word processor, and edit the formula, changing, say, the number of eggs per hen in the equation from 6 to 4.

What's needed:
The user needs to be able to easily enter mathematical equations. LaTeX is good for this.
The content should be rendered as MathML. This is the W3C math display standard that browsers now understand. MathML is ugly; users should be shielded from it.
Word processors need to be able to render MathML as an equation, and save the file with the equation retrievable. (When OpenOffice.org Writer 2.4 saves a file in either .html or MediaWiki format, the MathML is replaced with an image, rather than LaTeX.)
It would not be necessary for the MediaWiki edit box to accept MathML input if word processors made the LaTeX format readily available. The main thing remaining that MediaWiki can accomplish, then, is to render mathematical equations as MathML, rather than images, so that users can readily edit and reformat the equations. This will also reduce the file size. Cking 15:21, 30 March 2008 (UTC)

texvc: Taw's proposal

This is a culling from many other places that have raised this issue. Merciless refactoring of the text is required, to clearly present pros, cons and implementation possibilities. The most recent idea seems to be something which takes markup like this:
[[math:someTex]]
[[math:\alpha^2+\beta^2=1]]
and generates a PNG file which is then cached until the formula is next changed.
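A minimal sketch of what such a cache-and-render step could look like, written in Python purely for illustration (not the language MediaWiki itself uses); the render step is a stub, and none of this is the actual texvc implementation:

```python
# Hash the TeX fragment, render it once, and reuse the cached PNG afterwards.
import hashlib, os

CACHE_DIR = "math_cache"

def render_png(tex: str, path: str) -> None:
    # Placeholder: a real implementation would shell out to latex + dvipng here.
    with open(path, "wb") as fh:
        fh.write(b"")   # stand-in for the rendered image bytes

def math_image(tex: str) -> str:
    os.makedirs(CACHE_DIR, exist_ok=True)
    name = hashlib.md5(tex.encode("utf-8")).hexdigest() + ".png"
    path = os.path.join(CACHE_DIR, name)
    if not os.path.exists(path):          # only re-render when the formula is new or changed
        render_png(tex, path)
    return path

print(math_image(r"\alpha^2+\beta^2=1"))  # -> "math_cache/<md5>.png"
```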
Hope this helps in terms of ways to display math. Personally I think it's cool mediawiki supports even the bitmap version, it looks quite nice. cheers.

Old stuff from FAQ talk

Q. I am asking this again, because I do not feel I received an adequate answer, last time. To write pages on Math, the authors need to be able to include a large number of math symbols, graphs, etc. This is not a question of linking to someone else's images. These are items we need to be able to use freely and in our own original way. It would require, at present, uploading the content to one's own websites, assuming we have them, and linking to this. For example, to discuss differentiation and integration in Calculus, one needs to have: an integral sign, a summation sign, a symbolism for limits, etc. Somehow, writing "the limit of sin(x)/x as x goes to infinity" just doesn't cut it. Already we are all missing the small epsilon for "belongs to," the symbol for "is a subset of," and many more in the Abstract Algebra entries. I think this will impede the addition of much material in the Math area.

A. This is a difficult problem. Part of the problem is that HTML web pages don't do math symbols very well at all. So almost any solution will be a crutch. Within the next couple of years, browsers should be able to render MathML. This will be a time of much joy. Between now and then, I'm open to suggestions, but I think this is a really hard problem.

(Added note by Zephyr) The best way to put in math would probably be to write it in TeX (or LaTeX) and then use TtH. This will generate an HTML file with all of the math symbols in text form (no images). Here is an example, and here is another. TeX is almost inarguably the standard for mathematical writing.

TtH produces beautiful HTML-only equations, but they can't be inserted without being garbled. If this could be fixed, then TtH would definitely be a fabulous answer. -- TedDunning

As much as I admire Knuth, I must disagree totally. MathML should be the ultimate standard, not TeX. For one thing, TeX is more display-oriented and MathML is more semantics-oriented. MathML is also the standard supported by the World Wide Web Consortium, and will therefore likely be supported by browsers (Mozilla and Microsoft are committed to it, for example). TeX is used by a few academics, but they don't represent the real world.

Fooey on that. TeX is definitely only used by a few academics, but those few academics happen to include most of the mathematicians in the world. MathML is not usable as an authoring tool (and TtM exists for that anyway). TeX may be display-oriented, but few people actually write in TeX directly. Instead, they use LaTeX or AMSTeX, which are both very semantics-oriented. In any case, I happen to live in the present and MathML is definitely in the future and may remain there indefinitely. If we can just figure out how to support the tables that TtH produces then the world would be a lovely place (or at least the math pages in Wikipedia). -- TedDunning

It really comes down to WYSIWYG (What You See is What You Get) vs WYSIWYM (What You Say is What You Mean). First of all, I would like to say that TeX is used almost exclusively by anyone writing up math. Can you show me any major publication which does not require submissions to be in some variant of TeX? A popular thing about TeX is that (if you so wish) you do not need to worry about formatting, you can just say "this is a section, format it for me". You can still have this flexibility by setting the templates.
Have you looked at MathML lately? I was looking at some MathML examples and it seems that TeX is much more straightforward to write. $x^2-4=2$ becomes <math xmlns="http://www.w3.org/1998/Math/MathML"> <mrow> <msup><mi>x</mi><mrow><mn>2</mn></mrow> </msup> <mo>-</mo><mn>4</mn><mo>=</mo><mn>2</mn></mrow></math>. I cannot speak much for this, everybody is entitled to their opinion, but I think that MathML would have a very tough time overthrowing TeX (although it would be good for the TeX to MathML converter). Note: There is exactly such a converter from the same guy who wrote TtH -- TedDunning

A minor but important correction to the above: It is LaTeX in which you, in principle, should not need to care about formatting, just concentrate on the content and structure. Plain TeX, on the other hand, is mostly a purely formatting-oriented, low-level typesetting language. Unfortunately, there are times in LaTeX when you do have to care about formatting. E.g. there are no tables that would a) work in paragraph mode and b) at the same time not require manually setting the table width. But still, TeX math is the best way to write math, and I'd claim LaTeX the best way to write structured documents, despite its flaws. The usual rule of "least worst" applies. WYSIWYG is never an alternative to me. Lout is an alternative, and has some advantages (e.g. =>, <=) but some major disadvantages: 'x_i^2' (the natural and TeX way to write this) vs. the horribly unreadable 'x sub i supp 2'. Anyway, I think this project really needs a way to write math. Current text-emulation is horrible. I'd vote for using TeX math as it is perhaps the most readable and writable format, and most mathematicians know it. Something like LaTeX2HTML could then be used to render the formulas as images (as MathWorld did...). When browsers decently support MathML, a converter from TeX to that format can be used. Whatever people might think, MathML is not the ultimate solution, because it is utterly unwritable.

Note that HTML 4.0 contains many math symbols (all Greek letters, integrals, summation signs, logical and set theoretic symbols etc.). I have had no problem typesetting math using these, and occasionally a bit of ASCII art. So while it would be preferable to have some TeX or MathML mechanism in place, I don't think it's a big issue and the lack of it does not hold up the math pages right now. (If your browser does not support all HTML 4.0 entities, report it as a bug to the manufacturer.) --AxelBoldt

Sorry, Axel, but that's just not helpful advice at all at this point. Most popular browsers in use today fail to support most of the symbols defined in HTML 4, with good reason (mainly the lack of those symbols in the native OS). It's also not a reportable bug (as if it were possible to report bugs in Microsoft products anyway :-), because HTML 4 doesn't require that they be rendered correctly. Some math articles would greatly benefit from typesetting features still not available in HTML 4. It is entirely appropriate for us to investigate other methods, and for the even more important reason that HTML won't allow us to save the semantic information about the formulas, only their appearance. --LDC

Well, I'm told that the most popular browser, IE (version 5.5), displays all math and Greek symbols from HTML 4.0 just fine. (I don't know if one has to download free fonts from Microsoft though.) And free software is usually much better at following the standards; Mozilla and Konqueror work fine as well. So which ones don't work?
Also, I'm all in favor of investigating better math typesetting methods, but I claim that right now the math articles are not held back in any way. --AxelBoldt

How about using EzMath in the pages (http://www.w3.org/People/Raggett/EzMath), and having the wiki translate to MathML or images? It's very readable (and writable) as plain text. Sample expressions: "integral of sin x from 0 to 2pi wrt x", "x^2-4=2", "x squared - 4 = 2".

[Note added by Serge Winitzki] Here is a compromise: you type math in TeX and see HTML and images (http://www.mathcircle.org/cgi-bin/mathwiki.pl?). Unfortunately, the (free) software for this "mathwiki" implementation is not in production shape right now. Please contact me if you would like to get in touch about this. (http://www.geocities.com/CapeCanaveral/Lab/5735/1/email_me.html) I intend to work on improving mathwiki code later this year (I am not the principal author). -- S.W.

This is a very good solution since it is fast and completely portable among browsers. Getting the generated image font sizes to match up with the text would make this an excellent choice. The image description could use TeX so that Lynx users would find it useful as well. --Jonathan

Jonathan, first, welcome to Wikipedia, and second, please stay on top of this. CliffordAdams is the main programmer for UseModWiki, but I'm not sure he's interested in including functionality of the sort you describe. But if he doesn't do it, I suspect we'll be able to interest other programmers to work on his code and do it. Anyway, our math section is very active and competent and there's definitely a crying need for this sort of feature. --LMS

There is a patch for ZWiki called LaTeXWiki. It lets you type in LaTeX and ultimately renders it as a PNG graphic. The LaTeXWiki page is http://latexwiki.rootnode.com/

Village pump, Sept 2002

This has probably come up before, so if I'm repeating tired old arguments, or worse still inadvertently walking into a minefield, apologies, but I think it would be nice to have a simpler, more wiki-ish markup for subscripts and superscripts, rather than relying on raw HTML (it was really starting to piss me off while doing the notation example in electron configuration). Something a bit TeXy, perhaps, like ^ and _ (though that leads to ambiguity over exactly what wants sub/superscripting, so it might be best to use paired symbols ^2^ and _2_, or perhaps even ^2_ _2^, so that ^ meant go up a level and _ meant go down a level) --Bth

It has come up before, but it's well worth mentioning again! Something like ^^2^^ might be safer. It's also been suggested that we could use TeX itself to generate PNG images, thus: [[math:some TeX expression in here]]. I like this idea a lot, even though I'm of the age of WYSIWYG and I fear TeX -- Tarquin 22:10 Sep 26, 2002 (UTC)

The problem with the suggested syntaxes is that (one way or another) they don't allow scripts within scripts. My suggestion in the past has been "^{...}" and "_{...}" (which follows TeX even more closely) on the assumption that "^{" and "_{" will rarely or never occur naturally together. We could still double the "^" and "_" if that assumption proves mistaken, but we'll still need some sort of asymmetric bracketing; unlike other wiki syntax, scripts don't simply toggle. — Toby 10:32 Sep 29, 2002 (UTC) (But TeX is lovely ...)
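A minimal Python sketch of the asymmetric-bracket parsing discussed above (a hypothetical helper, not actual MediaWiki code) shows why nested scripts need markup that opens and closes rather than toggles:

def render_scripts(text):
    # Expand the proposed '^{...}' / '_{...}' markup into HTML sup/sub tags.
    # A stack records which script level is open, so scripts within scripts
    # close in the right order.
    out, stack, i = [], [], 0
    while i < len(text):
        if text[i:i + 2] == '^{':
            out.append('<sup>'); stack.append('sup'); i += 2
        elif text[i:i + 2] == '_{':
            out.append('<sub>'); stack.append('sub'); i += 2
        elif text[i] == '}' and stack:
            out.append('</' + stack.pop() + '>'); i += 1
        else:
            out.append(text[i]); i += 1
    return ''.join(out)

print(render_scripts('x_{i}^{2} and e^{-x^{2}}'))
# x<sub>i</sub><sup>2</sup> and e<sup>-x<sup>2</sup></sup>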
The thing is, nice though it would be to have some sort of swish automagic mathematics-generator, I think this is a slightly separate (though related) issue: sub/superscripts are possible within HTML and they occur frequently enough (and not always in mathematical contexts) that it'd be nice to be able to do them wiki-style. --Bth

Yes, TeX is quite lovely; I use it all the time, even for writing letters to friends. After discussing possibilities for implementing it ad nauseam on the Wikipedia:mailing list, I eventually decided that it would only make editing harder for the uninitiated and thus shouldn't be done. But y'all may have fresh ideas that we didn't have before, so feel free to bring it up there again. — Toby 10:32 Sep 29, 2002 (UTC)

I think we should bring it up again. TeX would only be used for fairly complex equations, so most people who will come face to face with it will be mathematicians of some sort. Those (like me) who don't know TeX already can probably handle the learning curve. -- Tarquin 10:45 Sep 29, 2002 (UTC)

I suspect that the time is right to think about allowing TeX markup, rendered as either HTML or MathML on the fly using (latex2html, tex4ht, tth) and TtM - the selection being a user preference. MathML is at a critical stage where many of the newer browsers now support it. It's time web sites started delivering or we'll be here in 10 years' time still discussing how to put maths on the web. The browser support issue is largely solved by linking pages to an XSLT stylesheet - see http://www.w3.org/Math/XSL/Overview-tech.html. Some font problems may remain for a time. On Wikipedia, the biggest technical obstacle is probably the need to deliver XHTML rather than HTML in order to embed MathML properly - though a quick glance through some pages suggests that the HTML may convert easily. -- Gmp26

Summary of my opinion at this stage: Most math can be rendered in ASCII and the simple HTML that we support, and should be for maximum readability. Most of what we'd need rendered by LaTeX or the like and presented in an image format is diagrams, matrices, and other big displayed things. Since these often require extensions to LaTeX like Xypic, we shouldn't try to write our own LaTeX parser for what we think we'll need but instead allow people to use what they need for that particular diagram. Editing the LaTeX source of a diagram is much better for the wiki way than uploading new versions of an image. We don't really need to support special math markup inline, although there are some MathML features (like multiscripts) that it would be nice to use there. If we only call LaTeX for displayed equations and diagrams, then a natural wiki markup would be "$$" at the beginning of a line (and only there!). An ending "$$", natural for LaTeX but not for wiki, could be optional. If we're going to load packages using \usepackage, then we'll need some markup to indicate the packages used, since \usepackage must appear before the \begin{document} and \[ that precede the inputted source. — Toby Bartels 13:59 Dec 11, 2002 (UTC)

Why is ONLY a small subset of the TeX commands implemented? No eqnarray, no \underrightarrow, no etc., etc., etc. -- 128.175.112.224

I ran across the Wikipedia page for limit point, and noticed that my browser (IE6, the current most popular browser by far) displayed the HTML codes for the math symbols as plaintext. This was ugly and unreadable, so I changed them to use math tags. The author reverted my changes! I tried to discuss this with them but received no reply.
While I'm certainly no fan of in-line images for symbols, in this case I think they're better than the alternative (strange codes appearing to 95% of viewers). Besides that, I'm sure the Wikipedia engine could easily convert these to HTML codes for users whose browsers support them, and even MathML one day for browsers supporting that. In other news, I have read a Microsoft website in which Microsoft insisted that they would never support MathML natively in IE, and that plug-ins were good enough. I'm sorry I don't have a link right now, but if they don't change their mind about this, any MathML plans for the web could be shot. Dcoetzee

On some shared sites LaTeX is not available, and so the texvc solution does not work. An alternative is to use mimetex, an on-the-fly generator. I have put it to work here: http://www.physcomments.org/wiki/ (please feel free to create some sandbox area if you want to test the idea). It needs, of course, a different math.php module. Let's try this:
<math>\frac{dy}{dx}=2xy+c</math>
<math>y=2x^{2}y+c</math>
<math>y-2x^{2}y=c</math>
<math>y(1-2x^{2})=c</math>
<math>y=c/(1-2x^{2})</math>
Karst tiankengs as refugia for indigenous tree flora amidst a degraded landscape in southwestern China

Yuqiao Su, Qiming Tang, Fuyan Mo & Yuegui Xue

We conducted floristic and community analyses to compare the floristic composition, forest structure, taxonomic richness, and species diversity between two tiankeng (large doline, or sinkhole) habitats and two outside-tiankeng habitats of forest fragments in a degraded karst area in southwestern China. We found remarkably higher taxonomic richness in the tiankeng habitats than in the outside-tiankeng habitats at the species, generic, and familial levels. The inside-tiankeng habitats had higher floristic diversity but lower dominance. The remarkably higher uniqueness at all taxonomic levels and the much larger tree size in the two tiankeng habitats than in the outside-tiankeng habitats demonstrated the old-growth and isolated nature of the tiankeng flora. Plot-scale species richness, Shannon-Wiener index, Pielou's evenness, and Berger-Parker dominance significantly differed across habitats. Heterogeneity in floristic composition at the species, generic, and familial levels was extremely significant across habitats. In pairwise comparisons, except for the Chuandong Tiankeng-Shenmu Tiankeng pair, all the pairs showed significant between-habitat heterogeneity in floristic composition. Our results suggest that as oases amidst the degraded karst landscape, tiankengs serve as modern refugia that preserve old-growth forest communities with their rich floristic diversity, and can provide a model for habitat conservation and forest restoration in that area.

Biodiversity loss, habitat fragmentation, and land degradation characterised by soil erosion, lower fertility, and rocky desertification are the typical features of mountainous karst landscapes1,2,3. The karst landforms are widespread but with restricted distributions in the world4, 5. While the karst landforms are associated with splendid landscapes and picturesque views, they are commonly regarded as an ecologically vulnerable system6, 7.
In China, karst landforms occur mainly in southwestern China's Guangxi, Guizhou, and Yunnan provinces. During the past twenty years, a geographic wonder called tiankeng has been discovered in the karst areas of southwestern China, typically in Leye county of Guangxi, and revealed to the outside world. A tiankeng is a unique type of large doline, or sinkhole, as defined in Chinese and English literature8,9,10,11,12, and it can be translated literally from the Chinese as heavenly pit. The rapidly developing tourism industry and worldwide cave exploration efforts have driven the discovery of karst tiankengs. Current, although limited, investigations from excursions to this area have revealed that the tiankengs contain pristine old-growth forest8, 13, thus making them oases amidst the degraded karst landscape. We hypothesised that these tiankengs might serve as refugia for the once rich biotic diversity in this area. Investigating the floristic composition and structure of forest communities that occur in special landforms and terrains, such as tiankengs, or dolines, will help elucidate the historical distribution of plant communities and their succession, shift with environmental change, and plant phylogeography. Dolines in different parts of the world are morphologically different and may have different names10, 12. While the large collapsed doline in southwestern China has been given a term called tiankeng, a type of collapsed doline in Mexico is called Rejolladas14. Similarly, dolines are called Japage in Croatia15 and Minyé in Papua New Guinea11. Recent studies indicated that dolines in Hungary play a key role in preserving different groups of species, and many endangered vascular plant species occur there16,17,18,19. Several other studies on the vegetation and environment relations also suggested that dolines are rich in plant and animal lives20,21,22 and some reported new taxa found in dolines23, 24. As a special type of negative terrain, a tiankeng is a large collapsed doline at least 100 m deep and wide, commonly surrounded by vertical cliffs and inaccessible to humans12. The pristine forest inside a tiankeng mirrors the natural geological and ecological processes in the region. Due to tiankengs' inaccessibility, the studies that have been conducted are limited to general scientific surveys, geological investigations, and touristic explorations12, 25. A few studies on plant resources, Chinese traditional herbs13, 26, and soil pollution monitoring27,28,29 have recently been reported, but we know little of the composition, diversity, and structure of the "underground" vegetation inside the tiankengs. Our preliminary report on the forest community structure and diversity in Liuxing Tiankeng from this area8 may be the first published report on the ecological analysis of the forest community inside a tiankeng. A good understanding of the forest communities that have been preserved in the tiankeng habitats will also help in the design of conservation strategies. The flora and forest community of tiankengs provide a model of a species pool for protecting the extant fragmented vegetation and restoring a species-rich forest ecosystem in the outside-tiankeng degraded landscape. In this study, we aimed to reveal the floristic composition and community structure of the tiankeng pristine forest compared to those of the outside-tiankeng forest fragments. 
To achieve this goal, we compared the floristic composition, taxonomic richness, tree size, and species diversity between two tiankeng habitats and two outside-tiankeng habitats in forest fragments of a degraded karst area in southwestern China. Floristic richness and abundance For tree stems ≥ 10 cm diameter at breast height (DBH), we recorded a total of 933 tree individuals of 96 species from 66 genera and 38 families in all four habitats (Supplementary Table 1; see the Methods section for the definition of the four habitats.). We found contrasting patterns in species composition and higher-taxa composition. The two tiankeng habitats had remarkably higher taxonomic richness at the species, generic, and familial levels than the two outside-tiankeng habitats. The species richness in the four habitats, Chuandong Tiankeng, Shenmu Tiankeng, Forest Remnants, and Fengshui Woods, was 46, 55, 23, and 29, respectively; the generic richness was 39, 44, 22, and 27, respectively; and the familial richness was 23, 26, 14, 22, respectively (Figs 1–3). The most dominant species in the four habitats included 21, 20, 63, and 68 tree individuals, respectively; the most dominant genus included 21, 29, 63, and 68 tree individuals, respectively; and the most dominant family had 68, 76, 84, and 71 individuals, respectively (Figs 1–3). The rank-abundance curves (diversity-dominance curves) for the two outside-tiankeng habitats were much steeper than those for the inside-tiankeng habitats, indicating that the outside-tiankeng habitats had higher floristic dominance but lower floristic diversity and evenness. Rank abundance curves by species, showing number of species across habitats and the abundance and evenness patterns. Rank abundance curves by genus, showing number of genera across habitats and the abundance and evenness patterns. Rank abundance curves by family, showing number of families across habitats and the abundance and evenness patterns. Shared and unique taxa A comparison of the shared and unique taxa between the tiankeng habitat type and the outside-tiankeng habitat type demonstrated the unique and ancient nature of the tiankeng flora. The inside-tiankeng habitats shared 19 families, 22 genera, and 13 species with the outside-tiankeng habitats. While 13 families, 31 genera, and 56 species, representing 41%, 58%, and 81%, respectively, of the total number of taxa occurring in the tiankeng habitats, were unique, only 4 families, 15 genera, and 27 species, accounting for 16%, 40%, and 68%, respectively, of the total number of taxa occurring in the outside-tiankeng habitats, were unique to the outside-tiankeng habitats (Table 1; see also Fig. 4 and Supplementary Figure S1). The remarkably higher uniqueness of the inside-tiankeng flora at all the taxonomic levels further demonstrated the primitive and isolated nature of the tiankeng flora and the degradation of the outside-tiankeng habitats. Table 1 Floristic composition and the shared and unique taxa between inside-tiankeng and outside-tiankeng habitats. Two-way cluster dendrograms showing habitat associations and the groupings of similar floristic distribution at the levels of family (a) and Genus (b). Habitat code: CDTK = Chuandong Tiankeng; SMTK = Shenmu Tiankeng; FORE = Forest Remnants; FENG = Fengshui Woods. Tree size structure The inside-tiankeng habitats had many more large trees than the outside-tiankeng habitats. 
The largest tree occurring in each of the four habitats, Chuandong Tiankeng, Shenmu Tiankeng, Forest Remnants, and Fengshui Woods, had a maximum DBH of 76.0 cm, 85.0 cm, 41.2 cm, and 57.0 cm, respectively (Fig. 5). The number of large trees found in the tiankeng habitats was much greater than the number found in the outside-tiankeng habitats. The inside-tiankeng habitats had 267 large adult trees (DBH ≥ 30.0 cm), accounting for 54% of the total number of tree individuals occurring in the sampling plots there, whereas only 45 trees of such size were found in the sampling plots of the outside-tiankeng habitats, accounting for 10% of the total number of tree individuals occurring there (Fig. 5). The contrast in tree size structure between the inside-tiankeng and outside-tiankeng forest communities again showed the old-growth and isolated nature of the inside-tiankeng habitats. Histograms showing tree size distribution across habitats in the karst tiankeng landscape. We found significant differences in plot-scale species richness (Kruskal-Wallis test, P = 0.0022), Shannon-Wiener index (Kruskal-Wallis test, P = 0.0008), Pielou's evenness (Kruskal-Wallis test, P = 0.0005), and Berger-Parker dominance (Kruskal-Wallis test, P = 0.0006) across habitats (Fig. 6). Species richness and the Shannon-Wiener index showed similar patterns, and the inside-tiankeng habitats had significantly higher overall species richness and diversity than the outside-tiankeng habitats. Pielou's evenness showed different patterns from Berger-Parker dominance. Trees had a more even distribution with much smaller ranges in the forest communities of Chuandong Tiankeng and Shenmu Tiankeng than in the Forest Remnants and Fengshui Woods communities (Fig. 6c). Plot-scale species richness and diversity across habitats in a karst tiankeng landscape. Habitat code: 1 = Chuandong Tiankeng; 2 = Shenmu Tiankeng; 3 = Forest Remnants; 4 = Fengshui Woods. Habitat heterogeneity Multi-response permutation procedures (MRPP) presented overall and pairwise tests for significance of variation in taxonomic composition across habitats. By overall comparison, we found extremely significant differences in floristic composition across habitats at the familial, generic, and species levels (MRPP, P < 10−5). The highest within-habitat homogeneity (MRPP, A = 0.102) and between-habitat heterogeneity (MRPP, T = −11.260) across habitats existed at the species level (Table 2). By pairwise comparison, all the pairs except the pair of Chuandong Tiankeng and Shenmu Tiankeng showed significant differences. Floristic composition did not differ between the Chuandong Tiankeng and Shenmu Tiankeng habitats at the familial (MRPP, P = 0.082), generic (MRPP, P = 0.094), and species (MRPP, P = 0.070) levels (Table 2). Table 2 Multi-response permutation procedures (MRPP) for the taxonomic composition across habitats. The within-habitat homogeneity and between-habitat heterogeneity across habitats were confirmed by the results from two-way cluster analysis at the familial, generic, and species levels. The two-way cluster dendrograms (Fig. 4 and Supplementary Figure S1) showed habitat associations with the taxonomic entities. The taxonomic entities with similar habitat specificity were clustered in a group with close membership, while the habitats associated with similar floristic distributions were grouped together. At all the taxonomic levels, Chuandong Tiankeng and Shenmu Tiankeng were grouped together with 100% information remaining. 
The Forest Remnants habitat as a close branch was fused into the Chuandong Tiankeng and Shenmu Tiankeng cluster at the familial and generic levels (Fig. 4), while at the species level, the Forest Remnant and Fengshui Woods were first grouped together into a cluster with 65% information remaining; then, this cluster was fused into a single group with the Chuandong Tiankeng and Shenmu Tiankeng cluster (Supplementary Figure S1). Restoring the vegetation in the degraded karst areas is a major challenge for biodiversity conservation and regional development30, 31, since these areas have undergone longtime environmental degradation and biodiversity loss due to geological and climatic processes, as well as intense human activities32, 33. However, the recent discoveries of tiankengs (large dolines) in the karst area of southwestern China have shed light on this goal, because tiankeng habitats may have preserved the intact vegetation from a long–ago period. In this study, we conducted floristic and community analyses to compare the floristic richness, species diversity and forest structure between the subterranean tiankeng habitats and the outside-tiankeng Forest Remnants and Fengshui Woods using a plot sampling method. We found remarkably higher taxonomic richness in the tiankeng habitats than in the two outside-tiankeng habitats at the species, generic, and familial levels. The rank-abundance curves for the two outside-tiankeng habitats were much steeper than those for the inside-tiankeng habitats, indicating that the inside-tiankeng habitats had higher floristic diversity and evenness but lower dominance. The remarkably higher floristic uniqueness at all the taxonomic levels inside the tiankengs, compared to the outside-tiankeng habitats, demonstrated the primitive and isolated nature of the tiankeng flora and the degradation of the outside-tiankeng habitats. Tree size distribution is the fundamental forest stand structure from which we can judge or model forest age and growth status34,35,36. The contrasting tree size structure between the inside-tiankeng and the outside-tiankeng forest communities again showed the old-growth and isolated nature of the inside-tiankeng habitats. Community analyses showed changes in species diversity attributes at the plot scale across habitats. We found significant differences in the plot-scale species richness, Shannon-Wiener index, Pielou's evenness, and Berger-Parker dominance across habitats. MRPP indicated that heterogeneity in floristic composition at the species, generic, and familial levels was extremely significant for overall comparison across habitats. For pairwise comparison, except for the Chuandong Tiankeng-Shenmu Tiankeng pair, all the pairs showed significant between-habitat heterogeneity in floristic composition. Our results suggest that the tiankeng habitats serve as modern refugia that preserve old-growth forest communities with higher floristic diversity and predominantly much older trees than the degraded outside-tiankeng landscapes. A refugium refers to an area in which climate and vegetation have remained relatively unchanged while areas surrounding it have changed markedly37. In contrast to the arid, degraded landscape outside the tiankengs, the humid microclimate inside them sustained a rich flora because of the underground river system in the cone karst area10, 27 and the relatively isolated habitat deep inside the earth. 
Unlike the traditional refugium concept, which describes plant diversity as having survived from the widespread Pleistocene glaciers on the earth37, tiankengs are oases amidst the degraded karst landscape. They preserve the modern flora, with its rich diversity at a local or regional scale, and thus serve as modern refugia that protect the indigenous flora from the land degradation and biodiversity loss caused by karstification and human disturbance. Although the term tiankeng has entered the international karst lexicon, there is still controversy about the relationship between tiankeng and doline38, 39. The geographical differences in the relationships between the vegetation and the morphogenesis of tiankengs and dolines are still to be explored, in order to determine the mechanisms driving tiankengs' functionality as the refugia for the local biota. Our study of the tiankeng flora and forest communities in the karst areas in southwestern China will provide an important case study for the international karst and speleology research, biodiversity conservation in karst areas, and related biological resource studies. The indigenous tree flora and vegetation in the area have been ruined and fragmented by longtime natural processes such as geological and climatic events as well as intense recent human activities. However, down in the tiankengs, the pristine forest with its rich diversity and old-growth structure has been preserved, thus providing a genetic pool of the indigenous flora and possibilities for scientific research and revegetation in the degraded karst areas. The tiankengs not only are the refugia for the richly diverse indigenous biota, but also serve as tourist attractions for the outside world. Tiankengs in Leye, Guangxi Zhuang Autonomous Region, have been discovered and revealed to the outside world only in the past 20 years, but they have attracted attention worldwide. The rapidly developing tourism industry has taken advantage of this area for its tourist value and made it a well-known tourist destination. Tourist activities in this area have created new perils for the tiankeng ecosystem; therefore, conservation efforts in this area should prioritise tiankeng habitats and their rich biodiversity. Our study area was located in Leye county of Guangxi Zhuang Autonomous Region, southwestern China (106°10′–106°51′E, 24°30′–25°03′N). This area is on the southeast part of the Yunnan-Guizhou Plateau, which has a land surface characterised by cone karst terrain10. The area has a mid-subtropical monsoon climate with contrasting wet and dry seasons over the course of the year, a mean annual temperature of 16.6 °C and a mean annual precipitation of 1,400 mm8. Sampling design and plant census Two inside-tiankeng habitats and two outside-tiankeng habitats were selected for our study. Chuandong Tiankeng (coded as CDTK) has an average depth of 175 m and a maximum depth of 312 m, with a horizontal dimension of 270 m × 370 m at the mouth, which opens onto a mountaintop11. Shenmu Tiankeng (coded as SMTK) has an average depth of 186 m and a maximum depth of 234 m, with a horizontal dimension of 340 m × 370 m at the mouth, which opens into the valley of the cone karst landscape11. Outside the tiankengs on the nearby land surface, no continuous forest vegetation was found, only sporadic forest fragments. 
These forest fragments fall into two categories: one with frequent human interventions, such as harvesting fuelwoods, medicinal herbs, wild vegetables, and other forest by-products, as well as natural disturbances from rocky desertification, and the other, commonly referred to as Fengshui Woods, found near human settlements with intentional and rigorous protection for shading, windbreak, and/or sacred purposes40, 41. In our study, the first category of forest fragments was coded as FORE (Forest Remnants), and the other was coded as FENG (Fengshui Woods). Thus, two inside-tiankeng habitats and two outside-tiankeng habitats were defined for our study. Before we established plots for plant census, we made a three-day preliminary excursion to Chuandong Tiankeng, Shenmu Tiankeng, and the nearby outside-tiankeng land surface locations to gain a general knowledge of the forest communities both inside and outside the tiankengs. The preliminary survey focused on the habitat conditions and forest community physiognomy and helped locate the forested sites in each habitat for plot sampling. In each of the four habitats, we established five sampling plots, each containing two contiguous 20 m × 20 m quadrats for the census of trees. Such a plot dimension for tree sampling is appropriate and can be used in all the four habitats to make field data comparable, because some of the outside-tiankeng forest fragments are too small to accommodate more than two contiguous 20 m × 20 m quadrats. All trees ≥ 10 cm diameter at breast height (DBH) were tallied; tagged; identified to species; and recorded by species name, DBH, and tree height. DBH was measured with a diameter tape to the nearest 0.1 cm. Tree height was measured to the nearest 0.1 m with a measuring rod for trees ≤ 6.5 m and a clinometer (Suunto PM-5/1520, Finland) for trees > 6.5 m42, 43. Tree height data were not used in this study. The plant taxonomy and systematics follow the Checklist of Guangxi Plants 44. Many ecological studies focus on analyses at the species level. Here, apart from species analysis, we extended the statistical applications to higher-taxa analyses to comprise the generic and familial levels, because we intended to extract floristic patterns by comparing the forest communities of the inside-tiankeng habitats and the outside-tiankeng habitats. We constructed rank-abundance curves to elucidate floristic diversity and dominance patterns across habitats at the species, generic, and familial levels. The rank-abundance curve, also called the Whittaker plot, is a common statistical technique to plot abundance (or relative abundance) against an ordered taxonomic entity, which is ranked from the highest to the lowest abundance45, 46. To visualize habitat association with taxonomic entities at the familial, generic, and species levels, we performed two-way cluster analysis on the plot × family, plot × genus, and plot × species datasets of abundance, using group average method with Bray-Curtis distance47, 48, to show habitat relationships and groupings of taxonomic entities with similar distribution patterns. Two-way cluster analysis simultaneously classifies habitats and visually depicts the ecological similarities or differences in or between clusters of taxonomic entities47, 48. Interspecific association can also be observed from species presence/absence distribution in a particular habitat or habitat type in the dendrogram generated by two-way cluster analysis. 
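The two-way clustering described above and the plot-scale diversity indices defined below can be reproduced outside PC-ORD; the following is a minimal Python/SciPy sketch in which the plot × species abundance matrix is an invented toy example rather than our field data.

import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage

# Toy plot x species abundance matrix (rows = sampling plots,
# columns = tree species); the numbers are invented for illustration only.
abund = np.array([[21.,  5.,  0.,  3.],
                  [18.,  7.,  1.,  2.],
                  [ 0.,  2., 63., 10.],
                  [ 1.,  0., 54., 12.]])

# Two-way clustering: group-average (UPGMA) linkage on Bray-Curtis distances,
# applied to plots (rows) and to species (columns) separately.
plot_tree = linkage(pdist(abund, metric='braycurtis'), method='average')
species_tree = linkage(pdist(abund.T, metric='braycurtis'), method='average')

# Plot-scale diversity indices: Shannon-Wiener H', Pielou's evenness E,
# and Berger-Parker dominance D for one row of the abundance matrix.
def diversity(counts):
    counts = counts[counts > 0]
    p = counts / counts.sum()
    H = -(p * np.log(p)).sum()
    E = H / np.log(len(counts))
    D = counts.max() / counts.sum()
    return H, E, D

print([np.round(diversity(row), 3) for row in abund])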
To contrast differences in tree size across habitats, we plotted histograms of tree DBH by habitat. The histograms show the full spectra of the tree DBH distribution and give an impressive representation of the approximate stand age of each habitat, since DBH at community level is a good proxy variable for stand age, given similar forest types (e.g., all are natural hardwood forest.)34, 35, 49. To assess differences in community patterns across habitats, we performed the Kruskal-Wallis test, a nonparametric alternative to analysis of variance (ANOVA), to test for significance of differences in plot-scale species richness, Shannon-Wiener diversity index, Pielou's evenness, and Berger-Parker dominance. While species richness was represented by the number of species in a quadrat, Shannon-Wiener index, Pielou's evenness, and Berger-Parker dominance were calculated using the following equations46, respectively: $$H^{\prime} =-\sum {P}_{i}\,\mathrm{ln}\,{P}_{i}$$ $$E=H^{\prime} /\mathrm{ln}\,S$$ $${D}_{B-P}={N}_{{\rm{\max }}}/N$$ where H′ is the Shannon-Wiener index; E is Pielou's evenness; D B−P is Berger-Parker dominance; S is the number of species; N is the total number of individuals; N max is the number of individuals of the most abundant species; and P i is the relative abundance of the ith species, calculated as P i = n i /N. To evaluate variations in floristic composition at the familial, generic, and species levels across habitats, we performed multi-response permutation procedures (MRPP) on the plot × family, plot × genus, and plot × species datasets of abundance across habitats and made pairwise comparisons between habitats. MRPP is a nonparametric procedure for testing the hypothesis of no difference between two or more a priori groups of entities47, 48. The output of MRPP includes three major statistics: T, A, and P-value. T is a test statistic that describes the separation between groups, while A is a descriptor of within-group homogeneity, known as the "effect size", or chance-corrected within-group agreement47, 48. The rank-abundance curve analysis and the Kruskal-Wallis test were carried out with STATISTICA data analysis software system, version 8.0 (Statsoft, Inc. Tulsa, OK, USA), while two-way cluster analysis and MRPP were performed with PC-ORD, multivariate analysis of ecological data, version 6.0 (MjM Software, Gleneden Beach, OR, USA). The datasets generated during the current study are available from the corresponding author on reasonable request. Kovacic, G. & Ravbar, N. Analysis of human induced changes in a karst landscape - the filling of dolines in the Kras plateau, Slovenia. Sci Total Environ 447, 143–151 (2013). Liu, C. C. et al. Mixing litter from deciduous and evergreen trees enhances decomposition in a subtropical karst forest in southwestern China. Soil Biol Biochem 101, 44–54 (2016). Pipan, T. & Culver, D. C. Forty years of epikarst: what biology have we learned? Int J Speleol 42, 215–223 (2013). Day, M. Challenges to sustainability in the Caribbean karst. Geol Croat 63, 149–154 (2010). Janos, M. et al. Hazards and landscape changes (degradations) on Hungarian karst mountains due to natural and human effects. J Mt Sci-Engl 10, 16–28 (2013). Breg, M. Degradation of dolines on Logasko polje (Slovenia). Acta Carsologica 36, 223–231 (2007). Calo, F. & Parise, M. Evaluating the human disturbance to karst environments in southern Italy. Acta Carsologica 35, 47–56 (2006). Su, Y., Xue, Y., Fan, B., Mo, F. & Feng, H. 
Plant community structure and species diversity in Liuxing Tiankeng of Guangxi. Acta Botanica Boreali-Occidentalia Sinica 36, 2300–2306 (2016). Theodore, O. I. et al. Distribution of polycyclic aromatic hydrocarbons in Datuo karst Tiankeng of South China. Environ Geochem Hlth 30, 423–429 (2008). Waltham, T. Fengcong, fenglin, cone karst and tower karst. Cave and Karst Science 35, 77–88 (2008). Zhu, X. & Chen, W. Tiankengs in the karst of China. Speleogenesis and Evolution of Karst Aquifers 4, 1–18 (2006). Zhu, X. & Waltham, T. Tiankeng: Definition and description. Cave and Karst Science 32, 75–79 (2005). Huang, K. & Li, Q. Plant field investigation and the ecological protection of tiankeng scenic area. Journal of Baise University 22, 75–77 (2009). Munro-Stasiuk, M. J., Manahan, T. K., Stockton, T. & Ardren, T. Spatial and physical characteristics of Rejolladas in northern Yucatan, Mexico: Implications for ancient Maya agriculture and settlement patterns. Geoarchaeology 29, 156–172 (2014). Buzjak, N., Buzjak, S. & Oresic, D. Floristic, microclimatic and geomorphological features of collapsed doline Japage on the Zumberak (Croatia). Sumar List 135, 127–137 (2011). Batori, Z. et al. Vegetation of the dolines in Mecsek mountains (South Hungary) in relation to the local plant communities. Acta Carsologica 38, 237–252 (2009). Batori, Z. et al. The conservation value of karst dolines for vascular plants in woodland habitats of Hungary: refugia and climate change. Int J Speleol 43, 15–26 (2014). Batori, Z. et al. A comparison of the vegetation of forested and non-forested solution dolines in Hungary: a preliminary study. Biologia 69, 1339–1348 (2014). Batori, Z., Galle, R., Erdos, L. & Kormoczi, L. Ecological conditions, flora and vegetation of a large doline in the Mecsek Mountains (South Hungary). Acta Bot Croat 70, 147–155 (2011). Bergmeier, E. The vegetation of the high mountains of Crete - a revision and multivariate analysis. Phytocoenologia 32, 205–249 (2002). Raschmanova, N., Miklisova, D., Kovac, L. & Sustr, V. Community composition and cold tolerance of soil Collembola in a collapse karst doline with strong microclimate inversion. Biologia 70, 802–811 (2015). Vukelic, J., Mikac, S., Baricevic, D., Sapic, I. & Baksic, D. Vegetation and structural features of Norway spruce stands (Picea abies Karst.) in the virgin forest of Smrceve Doline in Northern Velebit. Croat J for Eng 32, 73–86 (2011). Takeuchi, W. Additional notes on Psychotria (Rubiaceae) from the southern karst of Papua New Guinea: P. defretesiana comb. et stat. nov., P. dieniensis var. banakii var. nov., and P. stevedarwiniana sp. nov. Phytotaxa 24, 19–27 (2011). Takeuchi, W. Additions to the flora of the southern mountains of Papua New Guinea: Begonia chambersiae sp nov (Begoniaceae), Kibara renneriae sp nov (Monimiaceae), and distributional records of four rarely seen taxa. Phytotaxa 52, 43–53 (2012). Kejun, D. et al. Methodological study on exposure date of Tiankeng by AMS measurement of in situ produced cosmogenic Cl-36. Nucl Instrum Meth B 294, 611–615 (2013). Su, S. Medical Pteridophyta resources investigation in the area of Dashiwei Tiankeng Group of Leye County. Hubei Agricultural Science 51, 5376–5380 (2012). Kong, X. S., Luan, R. J., Miao, Y., Qi, S. H. & Li, F. Polycyclic aromatic hydrocarbons in sediment cores from the Dashiwei Tiankeng reach in the Bailang underground river, South China. Environ Earth Sci 73, 5535–5543 (2015). Kong, X. S., Qi, S. H., It, O. & Zhang, Y. H. 
Distribution of organochlorine pesticides in soils of Dashiwei Karst Tiankeng (giant doline) area in South China. Environ Earth Sci 70, 549–558 (2013). Wang, Y. H. et al. Distribution and potential sources of organochlorine pesticides in the karst soils of a tiankeng in southwest China. Environ Earth Sci 70, 2873–2881 (2013). Duan, J., Li, Y. H. & Huang, J. An assessment of conservation effects in Shilin Karst of South China Karst. Environ Earth Sci 68, 821–832 (2013). Silva, M. S., Martins, R. P. & Ferreira, R. L. Cave conservation priority index to adopt a rapid protection strategy: A case study in Brazilian Atlantic rain forest. Environ Manage 55, 279–295 (2015). Gutierrrez, F. et al. Geological and environmental implications of the evaporite karst in Spain. Environ Geol 53, 951–965 (2008). Toure, D., Ge, J. & Zhou, J. Spatial patterns of tree species number in relationship with the local environmental variations in karst ecosystem. Appl Ecol Env Res 13, 1035–1054 (2015). Beltran, H. A., Chauchard, L., Velasquez, A., Sbrancia, R. & Pastur, G. M. Diametric site index: an alternative method to estimate site quality in Nothofagus obliqua and N-alpina forests. Cerne 22, 345–354 (2016). Franceschini, T., Martin-Ducup, O. & Schneider, R. Allometric exponents as a tool to study the influence of climate on the trade-off between primary and secondary growth in major north-eastern American tree species. Ann Bot-London 117, 551–563 (2016). Gao, X. et al. Modeling of the height-diameter relationship using an allometric equation model: a case study of stands of Phyllostachys edulis. J Forestry Res 27, 339–347 (2016). Lomolino, M. V., Riddle, B. R. & Brown, J. H. Biogeography. (Sinauer Associates, Inc., 2006). Gunn, J. Turloughs and tiankengs: Distinctive doline forms. Cave and Karst Science 32, 83–84 (2005). Pu, G. Z., Lv, Y. N., Xu, G. P., Zeng, D. J. & Huang, Y. Q. Research progress on karst tiankeng ecosystems. Bot Rev 83, 5–37 (2017). Jim, C. Y. Conservation of soils in culturally protected woodlands in rural Hong Kong. Forest Ecol Manag 175, 339–353 (2003). Yip, J. Y., Corlett, R. T. & Dudgeon, D. A fine-scale gap analysis of the existing protected area system in Hong Kong, China. Biodivers Conserv 13, 943–957 (2004). He, S. Y. et al. Topography-associated thermal gradient predicts warming effects on woody plant structural diversity in a subtropical forest. Sci Rep-Uk 7, 40387 (2017). Hu, Y. Q., Su, Z. Y., Li, W. B., Li, J. P. & Ke, X. D. Influence of tree species composition and community structure on carbon density in a subtropical forest. Plos One 10, e136984 (2015). Qin, H. & Liu, Y. Checklist of Guangxi Plants. (Science Press, 2010). Ingram, J. C., Whittaker, R. J. & Dawson, T. P. Tree structure and diversity in human-impacted littoral forests, Madagascar. Environ Manage 35, 779–798 (2005). Magurran, A. E. & McGill, B. J. Biological Diversity: Frontiers in measurement and assessment. (Oxford University Press, 2011). McCune, B., Grace, J. B. & Urban, D. L. Analysis of Ecological Communities. (MjM Software Design, 2002). Peck, J. E. Multivariate Analysis for Community Ecologists: Step-by-Step using PC-ORD. (MjM Software Design, 2010). Gazda, A. & Miscicki, S. Forecast of the development of natural forest resources using a size-class growth model. Sylwan 160, 207–218 (2016). We thank Mingfeng Xu, Yongqiang Wang, Yi Zhang, and Yixing Luo for their help with the field survey. This research is funded by a grant from the National Natural Science Foundation of China (Grant No. 
31260113) to Yuegui Xue.

College of Life Sciences, Guangxi Normal University, Guilin, 541006, Guangxi, China: Yuqiao Su, Qiming Tang, Fuyan Mo & Yuegui Xue

Y.X. and Y.S. conceived and designed the study. Y.S., Q.T. and F.M. performed field surveys and collected the data. Y.X. and Q.T. identified the plants. Y.S. analysed the data, prepared the figures, and wrote the first draft. Y.S. and Y.X. revised the draft and completed the final manuscript. All the authors reviewed and approved the manuscript. Correspondence to Yuegui Xue.

Supplementary Table S1 & Figure S1

Su, Y., Tang, Q., Mo, F. et al. Karst tiankengs as refugia for indigenous tree flora amidst a degraded landscape in southwestern China. Sci Rep 7, 4249 (2017). https://doi.org/10.1038/s41598-017-04592-x
Granular Gas of Inelastic and Rough Maxwell Particles Gilberto M. Kremer ORCID: orcid.org/0000-0001-6067-57101 na1 & Andrés Santos ORCID: orcid.org/0000-0002-9564-51802 na1 Journal of Statistical Physics volume 189, Article number: 23 (2022) Cite this article The most widely used model for granular gases is perhaps the inelastic hard-sphere model (IHSM), where the grains are assumed to be perfectly smooth spheres colliding with a constant coefficient of normal restitution. A much more tractable model is the inelastic Maxwell model (IMM), in which the velocity-dependent collision rate is replaced by an effective mean-field constant. This simplification has been taken advantage of by many researchers to find a number of exact results within the IMM. On the other hand, both the IHSM and IMM neglect the impact of roughness—generally present in real grains—on the dynamic properties of a granular gas. This is remedied by the inelastic rough hard-sphere model (IRHSM), where, apart from the coefficient of normal restitution, a constant coefficient of tangential restitution is introduced. In parallel to the simplification carried out when going from the IHSM to the IMM, we propose in this paper an inelastic rough Maxwell model (IRMM) as a simplification of the IRHSM. The tractability of the proposed model is illustrated by the exact evaluation of the collisional moments of first and second degree, and the most relevant ones of third and fourth degree. The results are applied to the evaluation of the rotational-to-translational temperature ratio and the velocity cumulants in the homogeneous cooling state. Granular gases are typically modeled as a system of agitated inelastic hard spheres [1,2,3]. In the basic inelastic hard-sphere model (IHSM) of granular gases, the particles are assumed to be smooth (i.e., with no rotational degrees of freedom) and the collisions are characterized by a constant coefficient of normal restitution, \(0<\alpha \le 1\). If the number density of the gas is low enough, the most relevant physical quantity is the one-particle velocity distribution function (VDF), which obeys the Boltzmann equation appropriately adapted to incorporate the inelastic nature of collisions. Nevertheless, the fact that the collision rate in the IHSM is proportional to the relative velocity of the two colliding particles prevents the associated collisional moments from being expressible in terms of a finite number of velocity moments, thus hampering the possibility of deriving analytical results. The above difficulty is also present for molecular gases (where collisions are elastic, i.e., \({\alpha }=1\)). In that case, it can be overcome by assuming that the gas is made of Maxwell molecules, that is, particles interacting via a repulsive force inversely proportional to the fifth power of distance [3,4,5,6,7,8]. This makes the collision rate independent of the relative velocity, so that the collisional moments become bilinear combinations of velocity moments of the same or lower degree. If collisions are inelastic (\({\alpha }<1\)), one can still construct the so-called inelastic Maxwell model (IMM) by assuming an effective mean-field collision rate independent of the velocity [9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44]. The original IHSM and the simpler IMM capture many of the most relevant features of granular gases [2, 45]. 
On the other hand, both models leave out the roughness of real grains, which gives rise to frictional collisions and energy transfer between the translational and rotational degrees of freedom. A convenient way of modeling this roughness effect is by augmenting the IHSM by means of a constant coefficient of tangential restitution \(-1\le \beta \le 1\) [46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89]. The resulting granular-gas model can be referred to as the inelastic rough hard-sphere model (IRHSM). Needless to say, the IRHSM is even much more difficult to tackle than the IHSM of frictionless, smooth particles. Therefore, in order to incorporate the influence of roughness and the associated rotational degrees of freedom on the dynamical properties and yet have a tractable model, it seems natural to construct an inelastic rough Maxwell model (IRMM), which would play a role with respect to the IRHSM similar to that played by the well-known IMM with respect to the IHSM (see Fig. 1). To the best of our knowledge, such an IRMM has not been proposed or worked out before. Sketch of four granular-gas models: IHSM, IMM, IRHSM, and IRMM The aim of this paper is to construct a tractable IRMM and illustrate its main features by evaluating exactly all the collisional velocity moments of first and second degree, as well as the most relevant ones of third and fourth degree. As expected, the results reduce to the known ones in the smooth limit (IMM) [41,42,43,44]. We also display the results for the conservative case of elastic and perfectly rough particles (\({\alpha }={\beta }=1\)), which constitutes the Maxwell analog of the Pidduck gas [90]. Moreover, we apply the exact knowledge of the isotropic second- and fourth-degree collisional moments to the study of the homogeneous cooling state (HCS), as described by the model. The organization of the paper is as follows. In Sect. 2, after summarizing the collision rules and the basic properties of the Boltzmann equation for a granular gas of inelastic and rough particles, and recalling the IRHSM, our IRMM is written down. The core of the paper is presented in Sect. 3, where the exact structure of the collisional moments is displayed in Table 1, the explicit expressions of the coefficients in terms of inelasticity and roughness being moved to Appendix A, while some particularizations and consistency tests are shown in Appendix B. Next, Sect. 4 is devoted to the application of the results to the HCS. Finally, the paper is closed in Sect. 5 with some concluding remarks. Kinetic Theory of Inelastic and Rough Particles We consider a granular gas made of inelastic and rough particles of diameter \(\sigma \), mass m, and moment of inertia \(I= (m\sigma ^2/4)\kappa \), where the dimensionless moment of inertia ranges from \(\kappa =0\) (mass concentrated at the center of the sphere) to a maximum value \(\kappa =\frac{2}{3}\) (mass concentrated at the spherical surface), the value \(\kappa =\frac{2}{5}\) corresponding to a uniform mass distribution. Collision Rules Let us denote by \(\mathbf {v}\) and \(\varvec{\omega }\) the translational and angular velocities, respectively, and introduce the short-hand notation \(\{\mathbf {v},\varvec{\omega }\}\rightarrow \varvec{\xi }\) and \(d\mathbf {v}d\varvec{\omega }\rightarrow d\varvec{\xi }\).
A direct encounter between two particles 1 and 2 is characterized by the precollisional velocities \((\varvec{\xi }_1; \varvec{\xi }_2)\), the postcollisional velocities \((\varvec{\xi }_1'; \varvec{\xi }_2')\), and the collision unit vector \(\widehat{\varvec{\sigma }}=(\mathbf {r}_2-\mathbf {r}_1)/\sigma \) joining the centers of the two colliding particles. The pre- and postcollisional velocities are related by $$\begin{aligned} m \mathbf {v}_1^\prime =m\mathbf {v}_1 -\mathbf {Q},\quad&I \varvec{\omega }_1^\prime =I\varvec{\omega }_1 -{\sigma \over 2}\,\widehat{\varvec{\sigma }}\times \mathbf {Q}, \end{aligned}$$ $$\begin{aligned} m \mathbf {v}_2^\prime =m\mathbf {v}_2 +\mathbf {Q},\quad&I \varvec{\omega }_2^\prime =I\varvec{\omega }_2 -{\sigma \over 2}\,\widehat{\varvec{\sigma }}\times \mathbf {Q}, \end{aligned}$$ where \({\mathbf {Q}}\) is the impulse exerted by particle 1 on particle 2. The relative velocity of the points of the spheres at contact during a collision is \(\overline{\mathbf {g}}=\mathbf {g}-{\sigma \over 2}\widehat{\varvec{\sigma }}\times (\varvec{\omega }_1+\varvec{\omega }_2)\), where \(\mathbf {g}=\mathbf {v}_1-\mathbf {v}_2\) denotes the center-of-mass relative velocity; a similar relation holds for \(\overline{\mathbf {g}}'\). In the most basic model, an inelastic collision of two rough particles is characterized by $$\begin{aligned} \widehat{\varvec{\sigma }}\cdot \overline{\mathbf {g}}^\prime =-\alpha (\widehat{\varvec{\sigma }}\cdot \overline{\mathbf {g}}),\quad \widehat{\varvec{\sigma }}\times \overline{\mathbf {g}}^\prime =-\beta (\widehat{\varvec{\sigma }}\times \overline{\mathbf {g}}), \end{aligned}$$ where \(0<\alpha \le 1\) and \(-1\le \beta \le 1\) are the coefficients of normal and tangential restitution, respectively. For an elastic collision of perfectly smooth particles one has \(\alpha =1\) and \(\beta =-1\), while \(\alpha =1\) and \(\beta =1\) for an elastic encounter of perfectly rough particles. It is possible to show that Eqs. (2) imply [2, 79] $$\begin{aligned} \mathbf {Q}={m\widetilde{\alpha }}(\widehat{\varvec{\sigma }}\cdot \mathbf {g})\widehat{\varvec{\sigma }}-m\widetilde{\beta }\widehat{\varvec{\sigma }}\times \left( \widehat{\varvec{\sigma }}\times \mathbf {g}+\sigma {\varvec{\omega }_1+\varvec{\omega }_2\over 2}\right) , \end{aligned}$$ $$\begin{aligned} \widetilde{\alpha }\equiv {1+\alpha \over 2},\quad \widetilde{\beta }\equiv \frac{1+\beta }{2}\frac{\kappa }{1+\kappa }. \end{aligned}$$ Equations (1) and (3) express the postcollisional velocities in terms of the precollisional velocities and of the collision vector [91]. For a restitution encounter, the pre- and postcollisional velocities are denoted by \((\varvec{\xi }_1^*; \varvec{\xi }_2^*)\) and \((\varvec{\xi }_1; \varvec{\xi }_2)\), respectively, and the collision vector by \(\widehat{\varvec{\sigma }}^*=-\widehat{\varvec{\sigma }}\). One then has $$\begin{aligned} \widehat{\varvec{\sigma }}^*\cdot \mathbf {g}=-\alpha \left( \widehat{\varvec{\sigma }}^*\cdot \mathbf {g}^*\right) =-\widehat{\varvec{\sigma }}\cdot \mathbf {g},\quad d\varvec{\xi }_1^*d\varvec{\xi }_2^*={1\over \alpha \beta ^2} d\varvec{\xi }_1 d\varvec{\xi }_2. 
\end{aligned}$$ Boltzmann Equation Assuming molecular chaos, and in the absence of external forces or torques, the Boltzmann equation for granular gases is given by $$\begin{aligned} {\partial f \over \partial t}+\mathbf {v}\cdot \varvec{\nabla }f=J[\varvec{\xi }\vert f,f], \end{aligned}$$ where \(f(\mathbf {r},\varvec{\xi }, t)\) is the one-particle VDF and \(J[\varvec{\xi }\vert f,f]\) is the bilinear collision operator. Quite generally, it can be written as $$\begin{aligned} J[\varvec{\xi }_1\vert f,f]=\sigma ^2\int d\varvec{\xi }_2\int d\widehat{\varvec{\sigma }}\,\left[ \mathcal {F}(\widehat{\varvec{\sigma }}^*\cdot \mathbf {g}^*){f_1^*f_2^*\over \alpha \beta ^2} -\mathcal {F}(\widehat{\varvec{\sigma }}\cdot \mathbf {g})f_1f_2\right] , \end{aligned}$$ where use has been made of Eq. (5) and, as usual, the notation \(f_1=f(\varvec{\xi }_1)\), \(f_2=f(\varvec{\xi }_2)\), \(f_1^*=f(\varvec{\xi }_1^*)\), \(f_2^*=f(\varvec{\xi }_2^*)\) has been employed. Moreover, the function \(\mathcal {F}(x)\) is proportional to the collision rate, its precise form defining the chosen kinetic model. The so-called weak form of the Boltzmann equation is obtained by multiplying both sides of Eq. (6) by an arbitrary trial function \(\Psi (\mathbf {r},\varvec{\xi },t)\) and integrating over \(\varvec{\xi }\). This yields $$\begin{aligned} \partial _t \left( n\langle \Psi \rangle \right) +\varvec{\nabla }\cdot \left( n \langle \mathbf {v}\Psi \rangle \right) -n\langle (\partial _t+\mathbf {v}\cdot \varvec{\nabla })\Psi \rangle = n\mathcal {J}[\Psi ], \end{aligned}$$ $$\begin{aligned} n(\mathbf {r},t)=\int d\varvec{\xi }\, f(\mathbf {r},\varvec{\xi },t) \end{aligned}$$ is the local number density, $$\begin{aligned} \langle \Psi \rangle ={1\over n(\mathbf {r},t)}\int d\varvec{\xi }\, \Psi (\mathbf {r},\varvec{\xi },t)f(\mathbf {r},\varvec{\xi },t) \end{aligned}$$ is the local average value of \(\Psi \), and $$\begin{aligned} \mathcal {J}[\Psi ]\equiv&\frac{1}{n}\int d\varvec{\xi }\,\Psi (\mathbf {r},\varvec{\xi },t)J[\varvec{\xi }\vert f,f]\nonumber \\ =&\frac{\sigma ^2}{2n}\int d\varvec{\xi }_1\int d\varvec{\xi }_2\int d\widehat{\varvec{\sigma }}\,\mathcal {F}(\widehat{\varvec{\sigma }}\cdot \mathbf {g})f_1f_2 \left( \Psi _1'+\Psi _2'-\Psi _1-\Psi _2\right) \end{aligned}$$ is the collisional production term of \(\Psi \), with the conventional notation \(\Psi _1=\Psi (\varvec{\xi }_1)\), \(\Psi _2=\Psi (\varvec{\xi }_2)\), \(\Psi _1'=\Psi (\varvec{\xi }_1')\), \(\Psi _2'=\Psi (\varvec{\xi }_2')\). In the second step of Eq. (11), we have used the relationships (5) and the standard symmetry properties of the collision term. In particular, the flow velocity (\(\mathbf {u}\)), the mean angular velocity (\(\varvec{\Omega }\)), and the granular temperatures (\(T_t\), \(T_r\), and T) correspond to \(\Psi =\mathbf {v}\), \(\Psi =\varvec{\omega }\), \(\Psi =V^2\), \(\Psi =\omega ^2\), and \(\Psi =mV^2+I\omega ^2\), respectively, where \(\mathbf {V}=\mathbf {v}-\mathbf {u}\) denotes the peculiar velocity. Thus, $$\begin{aligned} \mathbf {u}=\langle \mathbf {v}\rangle ,\quad \varvec{\Omega }=\langle \varvec{\omega }\rangle ,\quad T_t=\frac{m}{3}\langle V^2\rangle ,\quad T_r=\frac{I}{3}\langle \omega ^2\rangle ,\quad T=\frac{1}{2}\left( T_t+T_r\right) . 
\end{aligned}$$ The associated collisional production terms can be written as [79, 84] $$\begin{aligned}&\mathcal {J}[\mathbf {v}]=0,\quad \mathcal {J}[\varvec{\omega }]=-\zeta _\Omega \varvec{\Omega },\quad \mathcal {J}[V^2]=-\zeta _t\frac{3T_t}{m},\quad \mathcal {J}[\omega ^2]=-\zeta _r\frac{3T_r}{I}, \end{aligned}$$ $$\begin{aligned}&\mathcal {J}[mV^2+I\omega ^2]=-6\zeta T,\quad \zeta =\frac{T_t}{2T}\zeta _t+\frac{T_r}{2T}\zeta _r=\frac{\zeta _t+\theta \zeta _r}{1+\theta }. \end{aligned}$$ This defines the "de-spinning" rate coefficient \(\zeta _\Omega \), the energy production rates \(\zeta _t\) and \(\zeta _r\), and the cooling rate \(\zeta \). Moreover, \(\theta \equiv T_r/T_t\) is the rotational-to-translational temperature ratio. The Inelastic Rough Hard-Sphere Model (IRHSM) If the gas is modeled by the IRHSM, one has \(\mathcal {F}(x)=\Theta (x)x\), where \(\Theta (x)\) is the Heaviside step function. In that case, the collision operator becomes $$\begin{aligned} J_{\text {HS}}[\varvec{\xi }_1\mid f,f]=\sigma ^2\int d\varvec{\xi }_2\int _{+} d\widehat{\varvec{\sigma }}\,(\widehat{\varvec{\sigma }}\cdot \mathbf {g})\left( {f_1^*f_2^*\over \alpha ^2\beta ^2} -f_1f_2\right) , \end{aligned}$$ where the subscript (\(+\)) in the integration over \(\widehat{\varvec{\sigma }}\) denotes the constraint \(\widehat{\varvec{\sigma }}\cdot \mathbf {g}>0\) and we have taken into account that \(\Theta (\widehat{\varvec{\sigma }}^*\cdot \mathbf {g}^*)(\widehat{\varvec{\sigma }}^*\cdot \mathbf {g}^*)=\alpha ^{-1}\Theta (\widehat{\varvec{\sigma }}\cdot \mathbf {g})(\widehat{\varvec{\sigma }}\cdot \mathbf {g})\). Analogously, Eq. (11) becomes $$\begin{aligned} \mathcal {J}_{\text {HS}}[\Psi ]= \frac{\sigma ^2}{2n}\int d\varvec{\xi }_1\int d\varvec{\xi }_2\int _+ d\widehat{\varvec{\sigma }}\,(\widehat{\varvec{\sigma }}\cdot \mathbf {g})f_1f_2 \left( \Psi _1'+\Psi _2'-\Psi _1-\Psi _2\right) . \end{aligned}$$ If \(\Psi (\varvec{\xi })\) is a polynomial and thus \(\langle \Psi \rangle \) is a velocity moment, the collisional moment \(\mathcal {J}_{\text {HS}}[\Psi ]\) involves the full VDF or, equivalently, all the higher-degree moments. As a consequence, the infinite hierarchy of moments given by Eq. (8) cannot be solved, even in spatially uniform states, unless an approximate closure is applied. For instance, if f is approximated by a two-temperature Maxwellian, one finds [2, 75, 84] $$\begin{aligned} \zeta _\Omega ^\text {HS}\simeq \frac{5\nu _\text {HS}}{3}\frac{\widetilde{\beta }}{\kappa },\quad \nu _\text {HS}\equiv \frac{16}{5}n\sigma ^2\sqrt{\pi T_t/m}, \end{aligned}$$ $$\begin{aligned} \zeta _t^\text {HS}\simeq \frac{5\nu _\text {HS}}{3}\left[ \widetilde{\alpha }(1-\widetilde{\alpha })+\widetilde{\beta }(1-\widetilde{\beta })-\frac{\widetilde{\beta }^2}{\kappa }\theta (1+X)\right] ,\quad X\equiv \frac{I\Omega ^2}{3T_r}, \end{aligned}$$ $$\begin{aligned} \zeta _r^\text {HS}\simeq \frac{5\nu _\text {HS}}{3}\frac{\widetilde{\beta }}{\kappa }\left[ \left( 1-\frac{\widetilde{\beta }}{\kappa }\right) \left( 1+X\right) -\frac{\widetilde{\beta }}{\theta }\right] , \end{aligned}$$ (16c) $$\begin{aligned} \zeta ^\text {HS}\simeq \frac{5}{12}\frac{\nu _\text {HS}}{1+\theta }\left[ 1-{\alpha }^2+\frac{1-{\beta }^2}{1+\kappa }\theta \left( \frac{\kappa }{\theta }+1+X\right) \right] . \end{aligned}$$ (16d) The Inelastic Rough Maxwell Model (IRMM) The problem mentioned below Eq. (15) is also present in the conventional case of elastic collisions. 
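Before moving on to the Maxwell simplification, it may help to see the collision rules (1)-(4) in executable form. The following minimal Python sketch is an illustration added here (the function names and the randomly drawn test velocities are assumptions of the sketch, not part of the original text); it simply verifies numerically that Eqs. (1), (3), and (4) imply the restitution conditions (2) and linear-momentum conservation.

import numpy as np

def apply_collision(v1, w1, v2, w2, sig_hat, alpha, beta, kappa, m=1.0, sigma=1.0):
    # Impulse Q of Eq. (3), with alpha~ and beta~ as defined in Eq. (4)
    at = (1.0 + alpha) / 2.0
    bt = (1.0 + beta) / 2.0 * kappa / (1.0 + kappa)
    I = (m * sigma**2 / 4.0) * kappa
    g = v1 - v2
    Q = m * at * np.dot(sig_hat, g) * sig_hat \
        - m * bt * np.cross(sig_hat, np.cross(sig_hat, g) + sigma * (w1 + w2) / 2.0)
    # Postcollisional velocities, Eq. (1)
    v1p = v1 - Q / m
    v2p = v2 + Q / m
    w1p = w1 - (sigma / (2.0 * I)) * np.cross(sig_hat, Q)
    w2p = w2 - (sigma / (2.0 * I)) * np.cross(sig_hat, Q)
    return v1p, w1p, v2p, w2p

def contact_velocity(v1, w1, v2, w2, sig_hat, sigma=1.0):
    # Relative velocity of the points at contact, g_bar
    return (v1 - v2) - (sigma / 2.0) * np.cross(sig_hat, w1 + w2)

rng = np.random.default_rng(0)
alpha, beta, kappa = 0.8, 0.3, 2.0 / 5.0
v1, w1, v2, w2 = (rng.normal(size=3) for _ in range(4))
sig_hat = rng.normal(size=3)
sig_hat /= np.linalg.norm(sig_hat)

gb = contact_velocity(v1, w1, v2, w2, sig_hat)
v1p, w1p, v2p, w2p = apply_collision(v1, w1, v2, w2, sig_hat, alpha, beta, kappa)
gbp = contact_velocity(v1p, w1p, v2p, w2p, sig_hat)

# Restitution conditions, Eq. (2)
assert np.isclose(np.dot(sig_hat, gbp), -alpha * np.dot(sig_hat, gb))
assert np.allclose(np.cross(sig_hat, gbp), -beta * np.cross(sig_hat, gb))
# Linear-momentum conservation
assert np.allclose(v1p + v2p, v1 + v2)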
However, if the collision rate is assumed to be a constant (Maxwell model), the collisional moments involve moments of a degree equal to or lower than the degree of the moment \(\langle \Psi \rangle \) [3, 5,6,7,8]. We now apply the same philosophy to a granular gas of inelastic and rough particles and propose the IRMM by choosing \(\mathcal {F}(x)=\text {const}\) in Eq. (7), i.e., $$\begin{aligned} J_{\text {M}}[\varvec{\xi }_1\vert f,f]=\frac{\nu _{\text {M}}}{4\pi n}\int d\varvec{\xi }_2\int d\widehat{\varvec{\sigma }}\,\left( {f_1^*f_2^*\over \alpha \beta ^2} -f_1f_2\right) , \end{aligned}$$ where \(\nu _{\text {M}}\) is an effective collision frequency. Analogously, Eq. (11) becomes $$\begin{aligned} \mathcal {J}_{\text {M}}[\Psi ]= \frac{\nu _{\text {M}}}{8\pi n^2}\int d\varvec{\xi }_1\int d\varvec{\xi }_2\int d\widehat{\varvec{\sigma }}\,f_1f_2 \left( \Psi _1'+\Psi _2'-\Psi _1-\Psi _2\right) . \end{aligned}$$

Table 1 Collisional moments according to the IRMM, Eqs. (17 and 18).

Table 2 Comparison between the basic production rates in the IRHSM and IRMM with two main choices of \(\nu _{\text {M}}\).

Collisional Moments

Using the collision rules (1) and (3) in Eq. (18), it is possible to evaluate the collisional moments in terms of velocity moments. After some tedious work, we obtained all the collisional moments of first and second degree, plus the most relevant ones of third and fourth degree. Their structure is shown in Table 1, where the expected result \(\mathcal {J}_{\text {M}}[\mathbf {V}]=0\) (momentum conservation) is not included. This yields a total number of 13 independent collisional moments and 60 dimensionless coefficients. We have adopted the following criterion for the notation of the coefficients. Let us first denote by \(\Psi _{k_1k_2}(\varvec{\xi })=\Psi _{k_1}^{(1)}(\mathbf {V})\Psi _{k_2}^{(2)}(\varvec{\omega })\) a homogeneous velocity polynomial of degrees \(k_1\) and \(k_2\) with respect to \(\mathbf {V}\) and \(\varvec{\omega }\), respectively, i.e., \(\Psi _{k_1}^{(1)}(\lambda \mathbf {V})=\lambda ^{k_1}\Psi _{k_1}^{(1)}(\mathbf {V})\) and \(\Psi _{k_2}^{(2)}(\lambda \varvec{\omega })=\lambda ^{k_2}\Psi _{k_2}^{(2)}(\varvec{\omega })\). Then, a coefficient of the form \(Y_{k_1 k_2\mid \ell _1\ell _2}\) (with \(k_1+k_2=\ell _1+\ell _2\)) corresponds to the collisional moment \(\mathcal {J}_{\text {M}}[\Psi _{k_1k_2}]\) and accompanies a product of the form \(\langle \Psi _{i_1i_2}(\varvec{\xi })\rangle \langle \Psi _{j_1j_2}(\varvec{\xi })\rangle \) with \(i_1+j_1=\ell _1\) and \(i_2+j_2=\ell _2\). When, for a given \(\mathcal {J}_{\text {M}}[\Psi _{k_1k_2}]\), more than one coefficient of the form \(Y_{k_1 k_2\mid \ell _1\ell _2}\) exists, each one of them is distinguished by a superscript. A coefficient of the form \(Y_{k_1 k_2\mid k_1k_2}\) or \(Y_{k_1 k_2\mid k_1k_2}^{(1)}\) is a diagonal one because it couples \(\mathcal {J}_{\text {M}}[\Psi _{k_1k_2}]\) to \(\langle \Psi _{k_1k_2}(\varvec{\xi })\rangle \). Finally, we use the Greek letters \(Y=\chi \), \(Y=\varphi \), and \(Y=\psi \) for the coefficients in \(\mathcal {J}_{\text {M}}[\Psi _{k_1k_2}]\) associated with scalar, vector, and tensor quantities \(\Psi _{k_1k_2}(\varvec{\xi })\), respectively; moreover, an overline is used if the definition of \(\Psi _{k_1k_2}(\varvec{\xi })\) contains the inner product \(\mathbf {V}\cdot \varvec{\omega }\).
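As an illustration of this convention (an example spelled out here for the reader's convenience, not an entry copied from Table 1): a coefficient of the type \(\chi _{40\mid 22}\) belongs to the scalar collisional moment \(\mathcal {J}_{\text {M}}[\Psi _{40}]\) with \(\Psi _{40}(\varvec{\xi })=V^4\), and it multiplies products \(\langle \Psi _{i_1i_2}\rangle \langle \Psi _{j_1j_2}\rangle \) with \(i_1+j_1=2\) and \(i_2+j_2=2\), such as \(\langle V^2\rangle \langle \omega ^2\rangle \) or \(\langle V^2\omega ^2\rangle \) (the latter paired with the trivial moment \(\langle 1\rangle =1\)); since more than one coefficient of this type appears in \(\mathcal {J}_{\text {M}}[V^4]\), they are distinguished as \(\chi _{40\mid 22}^{(1)}\), \(\chi _{40\mid 22}^{(2)}\), and so on.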
It must be remarked that the results for \(\Psi _{01}(\varvec{\xi })=\varvec{\omega }\), \(\Psi _{20}(\varvec{\xi })=V^2\), and \(\Psi _{02}(\varvec{\xi })=\omega ^2\) are also valid in the case of a hybrid model where \(\mathcal {F}(x)=\text {const}\times \Theta (x)\). However, this does not happen in the case of the other quantities in Table 1, that is, the associated collisional moments cannot be expressed only in terms of velocity moments of the same or lower degree if \(\mathcal {F}(x)=\text {const}\times \Theta (x)\). The exact explicit expressions for the 60 coefficients appearing in Table 1 in terms of \(\widetilde{\alpha }\), \(\widetilde{\beta }\), and \(\kappa \) are given in Appendix A. The particularization of the coefficients to the IMM (\({\alpha }<1\), \({\beta }=-1\)) and to the Pidduck model (\({\alpha }={\beta }=1\)) [90] is presented in Appendix B, where some consistency tests are performed. The basic production rates are those defined by Eqs. (13). According to Table 1, $$\begin{aligned} \zeta _\Omega ^{\text {M}}=\nu _{\text {M}}\varphi _{01\mid 01} =\frac{4\nu _{\text {M}}}{3}\frac{\widetilde{\beta }}{\kappa }, \end{aligned}$$ $$\begin{aligned} \zeta _t^{\text {M}}=&\nu _{\text {M}}\left[ \chi _{20\mid 20}+\frac{4\theta }{\kappa }\chi _{20\mid 02}\left( 1+X\right) \right] \nonumber \\ =&\frac{2\nu _{\text {M}}}{3}\left[ \widetilde{\alpha }(1-\widetilde{\alpha })+2\widetilde{\beta }(1-\widetilde{\beta })-\frac{2\widetilde{\beta }^2}{\kappa }\theta (1+X)\right] , \end{aligned}$$ $$\begin{aligned} \zeta _r^{\text {M}}=\nu _{\text {M}}\left[ \frac{\kappa }{4\theta }\chi _{02\mid 20}+\chi _{02\mid 02}\left( 1+X\right) \right] =\frac{4\nu _{\text {M}}}{3}\frac{\widetilde{\beta }}{\kappa }\left[ \left( 1-\frac{\widetilde{\beta }}{\kappa }\right) \left( 1+X\right) -\frac{\widetilde{\beta }}{\theta }\right] , \end{aligned}$$ $$\begin{aligned} \zeta ^{\text {M}}=&\frac{\nu _{\text {M}}}{1+\theta }\left[ \chi _{20\mid 20}+\frac{\kappa }{4}\chi _{02\mid 20}+\left( \frac{4}{\kappa }\chi _{20\mid 02}+\chi _{02\mid 02}\right) \theta \left( 1+X\right) \right] \nonumber \\ =&\frac{1}{6}\frac{\nu _{\text {M}}}{1+\theta }\left[ 1-{\alpha }^2+2\frac{1-{\beta }^2}{1+\kappa }\theta \left( \frac{\kappa }{\theta }+1+X\right) \right] , \end{aligned}$$ where in the second equalities we have made use of Eqs. (A1, A2 and A4). Comparison with the approximate results for the IRHSM, Eqs. (16), shows that the choices \(\nu _{\text {M}}=\frac{5}{4}\nu _\text {HS}\) and \(\nu _{\text {M}}=\frac{5}{2}\nu _\text {HS}\) are directly related to roughness and inelasticity, respectively. Thus, in comparison with the IRHSM, the IRMM lessens the impact of inelasticity on energy dissipation, relative to the impact of roughness. As a consequence, there is no unique choice for \(\nu _{\text {M}}\) allowing for an agreement between the IRHSM and IRMM basic production rates for arbitrary \(\alpha \) and \(\beta \). This is reminiscent of the inability of the Bhatnagar–Gross–Krook (BGK) kinetic model to reproduce the Boltzmann shear viscosity and thermal conductivity with a single collision frequency [92]. A way of circumventing the impossibility of matching Eqs.
(16 and 19) with a unique relationship between \(\nu _\text {HS}\) and \(\nu _{\text {M}}\) consists of choosing \(\nu _{\text {M}}=\frac{5}{4}\nu _\text {HS}\) and then assuming the following mapping between the coefficients of normal restitution in the IRHSM (\(\alpha _{\text {HS}}\)) and the IRMM (\(\alpha _{\text {M}}\)): \(1-\alpha _{\text {M}}^2=2(1-\alpha _{\text {HS}}^2)\). For instance, \(\alpha _{\text {M}}=0.4\), 0.6, 0.8, and 1 would correspond to \(\alpha _{\text {HS}}=0.76\), 0.82, 0.91, and 1, respectively. Table 2 summarizes the consequences of the two main choices \(\nu _{\text {M}}=\frac{5}{4}\nu _\text {HS}\) and \(\nu _{\text {M}}=\frac{5}{2}\nu _\text {HS}\). Alternatively, and in analogy with a BGK-like model for the IRHSM [93], one can replace the IRMM collision operator (17) by $$\begin{aligned} J_{\text {M}}[\varvec{\xi }\mid f,f]\rightarrow J_{\text {M}}[\varvec{\xi }\mid f,f]+\nu _{\text {M}}\gamma \frac{\partial }{\partial \mathbf {V}}\cdot \left( \mathbf {V}f\right) ,\quad \gamma \equiv \frac{1-\alpha ^2}{12}. \end{aligned}$$ This modified IRMM remains amenable to an exact evaluation of the associated collisional moments and, in addition, allows one to recover Eqs. (16) if \(\nu _{\text {M}}=\frac{5}{4}\nu _\text {HS}\). More specifically, only the diagonal coefficients are affected by the modification (20): \(Y_{k_1k_2\mid k_1k_2}\rightarrow Y_{k_1k_2\mid k_1k_2}+k_1\gamma \) and \(Y_{k_1k_2\mid k_1k_2}^{(1)}\rightarrow Y_{k_1k_2\mid k_1k_2}^{(1)}+k_1\gamma \). On the other hand, in this paper we restrict ourselves to the IRMM as defined by Eq. (17), that is, without the extra term appearing in Eq. (20). The reason is that we regard the IRMM as a mathematical model on its own and not necessarily as a model intended to mimic the properties of the IRHSM, for which alternative approximate tools are already available [2, 75,76,77,78,79,80,81,82,83,84,85,86,87,88,89].

Application to the Homogeneous Cooling State (HCS)

In the absence of gradients or any external driving, Eq. (8) becomes $$\begin{aligned} \frac{d}{dt}\langle \Psi \rangle =\mathcal {J}[\Psi ]. \end{aligned}$$ For sufficiently long times, the system asymptotically reaches the HCS, which is characterized by a uniform, isotropic, and stationary scaled VDF [2, 84] $$\begin{aligned} \phi (\mathbf {c},\mathbf {w})=\frac{\left( 4T_tT_r/mI\right) ^{3/2}}{n}f(\mathbf {V},\varvec{\omega };t),\quad \mathbf {c}\equiv \frac{\mathbf {V}}{\sqrt{2T_t(t)/m}},\quad \mathbf {w}\equiv \frac{\varvec{\omega }}{\sqrt{2T_r(t)/I}}. \end{aligned}$$ In particular, \(X\rightarrow 0\) and \(\dot{\theta }\rightarrow 0\). The latter condition implies \(\zeta _t-\zeta _r\rightarrow 0\), which yields the HCS temperature ratio $$\begin{aligned} \theta =h+\sqrt{1+h^2}, \end{aligned}$$ $$\begin{aligned} h\equiv \frac{\kappa }{8}\frac{\chi _{02\mid 02}-\chi _{20\mid 20}}{\chi _{20\mid 02}}=\frac{1+\kappa }{2\kappa (1+{\beta })}\left[ c\frac{1+\kappa }{2}\frac{1-{\alpha }^2}{1+{\beta }}-(1-\kappa )(1-{\beta })\right] , \end{aligned}$$ with \(c=1\). As expected from the discussion in Sect. 3, in the case of the IRHSM, the two-temperature Maxwellian approximation [see Eqs. (16)] also yields Eqs. (23), except that \(c=2\) [2, 84].

Fig. 2 Plot of the temperature ratios a \(T_r/T_t=\theta \) and b \(T_r/T=2\theta /(1+\theta )\) as functions of \({\beta }\) for \(\kappa =\frac{2}{5}\) and \(\alpha = 0.4\) (—), 0.6 (–\(\cdot \)–), 0.8 (– –), and 1 (\(\cdots \)).
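A minimal numerical sketch of Eqs. (23) may be useful at this point (added for illustration; the function name and the sample parameter values are assumptions of the sketch, not taken from the paper). It reproduces the two limiting behaviors discussed next: \(\theta =1\) for \({\alpha }={\beta }=1\) and the unbounded growth of \(\theta \) as \({\beta }\rightarrow -1\) at fixed \({\alpha }<1\).

import math

def theta_hcs(alpha, beta, kappa, c=1.0):
    # Rotational-to-translational temperature ratio in the HCS, Eqs. (23);
    # c = 1 for the IRMM, c = 2 for the two-temperature Maxwellian
    # approximation of the IRHSM.
    h = (1.0 + kappa) / (2.0 * kappa * (1.0 + beta)) * (
        c * (1.0 + kappa) / 2.0 * (1.0 - alpha**2) / (1.0 + beta)
        - (1.0 - kappa) * (1.0 - beta)
    )
    return h + math.sqrt(1.0 + h * h)

kappa = 2.0 / 5.0                       # uniform mass distribution
print(theta_hcs(1.0, 1.0, kappa))       # -> 1.0 (Pidduck gas)
for beta in (0.5, 0.0, -0.5, -0.9, -0.99):
    print(beta, theta_hcs(0.8, beta, kappa))  # grows as beta -> -1 for alpha < 1

With c=2 instead, the same function gives the corresponding two-temperature Maxwellian estimate for the IRHSM quoted above.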
Figure 2 shows the \(\beta \)-dependence of the temperature ratios \(T_r/T_t\) and \(T_r/T\) for uniform particles (\(\kappa =\frac{2}{5}\)) and several representative values of the coefficient of normal restitution. We observe that, except in the case of elastic collisions (\({\alpha }=1\)), \(T_r/T_t\rightarrow \infty \) and, hence, \(T_r/T\rightarrow 2\) in the quasi-smooth limit (\({\beta }\rightarrow -1\)). The scaled fourth-degree moments are \(\langle c^4\rangle =\frac{9}{4} \langle V^4\rangle /\langle V^2\rangle ^2\), \(\langle w^4\rangle =\frac{9}{4}\langle \omega ^4\rangle /\langle \omega ^2\rangle ^2\), \(\langle c^2w^2\rangle =\frac{9}{4}\langle V^2\omega ^2\rangle /\langle V^2\rangle \langle \omega ^2\rangle \), and \(\langle (\mathbf {c}\cdot \mathbf {w})^2\rangle =\frac{9}{4}\langle (\mathbf {V}\cdot \varvec{\omega })^2\rangle /\langle V^2\rangle \langle \omega ^2\rangle \). Assuming the stationary VDF \(\phi (\mathbf {c},\mathbf {w})\) has not been reached yet, the evolution equations for those moments are $$\begin{aligned} \frac{d\langle c^4\rangle }{\nu _{\text {M}}dt}=&\frac{1}{\langle V^2\rangle ^2}\frac{\mathcal {J}_{\text {M}}[V^4]}{\nu _{\text {M}}} -2\frac{\langle V^4\rangle }{\langle V^2\rangle ^3}\frac{\mathcal {J}_{\text {M}}[V^2]}{\nu _{\text {M}}}\nonumber \\ =&-\chi _{40\mid 40}^{(1)}\frac{\langle V^4\rangle }{\langle V^2\rangle ^2}-\chi _{40\mid 40}^{(2)}-\frac{1}{3}\chi _{40\mid 40}^{(3)}-\frac{16\theta ^2}{\kappa ^2}\chi _{40\mid 04}\left( \frac{\langle \omega ^4\rangle }{\langle \omega ^2\rangle ^2}+\frac{5}{3}\right) -\frac{4\theta }{\kappa }\chi _{40\mid 22}^{(1)}\nonumber \\&\times \left[ 2\frac{\langle V^2\omega ^2\rangle }{\langle V^2\rangle \langle \omega ^2\rangle } -\frac{\langle (\mathbf {V}\cdot \varvec{\omega })^2\rangle }{\langle V^2\rangle \langle \omega ^2\rangle }+\frac{5}{3}\right] +2\left( \chi _{20\mid 20 }+\frac{4\theta }{\kappa }\chi _{20\mid 02 }\right) \frac{\langle V^4\rangle }{\langle V^2\rangle ^2}, \end{aligned}$$ $$\begin{aligned} \frac{d\langle w^4\rangle }{\nu _{\text {M}}dt}=&\frac{1}{\langle \omega ^2\rangle ^2}\frac{\mathcal {J}_{\text {M}}[\omega ^4]}{\nu _{\text {M}}} -2\frac{\langle \omega ^4\rangle }{\langle \omega ^2\rangle ^3}\frac{\mathcal {J}_{\text {M}}[\omega ^2]}{\nu _{\text {M}}}\nonumber \\ =&-\frac{\kappa ^2}{16\theta ^2}\chi _{04\mid 40}\left( \frac{\langle V^4\rangle }{\langle V^2\rangle ^2}+\frac{5}{3}\right) -\frac{\kappa }{4\theta }\chi _{04\mid 22 }^{(1)}\left[ 2\frac{\langle V^2\omega ^2\rangle }{\langle V^2\rangle \langle \omega ^2\rangle }- \frac{\langle (\mathbf {V}\cdot \varvec{\omega })^2\rangle }{\langle V^2\rangle \langle \omega ^2\rangle }+\frac{5}{3}\right] \nonumber \\&-\chi _{04\mid 04}^{(1)}\frac{\langle \omega ^4\rangle }{\langle \omega ^2\rangle ^2}-\chi _{04\mid 04}^{(2)}-\frac{1}{3}\chi _{04\mid 04}^{(3)} +2\left( \chi _{02\mid 02 }+\frac{\kappa }{4\theta }\chi _{02\mid 20 }\right) \frac{\langle \omega ^4\rangle }{\langle \omega ^2\rangle ^2}, \end{aligned}$$ $$\begin{aligned} \frac{d\langle c^2w^2\rangle }{\nu _{\text {M}}dt}=&\frac{1}{\langle V^2\rangle \langle \omega ^2\rangle }\frac{\mathcal {J}_{\text {M}}[V^2\omega ^2]}{\nu _{\text {M}}}-\frac{\langle V^2\omega ^2\rangle }{\langle V^2\rangle ^2 \langle \omega ^2\rangle }\frac{\mathcal {J}_{\text {M}}[V^2]}{\nu _{\text {M}}}-\frac{\langle V^2\omega ^2\rangle }{\langle V^2\rangle \langle \omega ^2\rangle ^2}\frac{\mathcal {J}_{\text {M}}[\omega ^2]}{\nu _{\text {M}}}\nonumber \\ =&-\frac{\kappa }{4\theta }\chi _{22\mid 40}^{(1)}\left( 
\frac{\langle V^4\rangle }{\langle V^2\rangle ^2}+1\right) -\frac{\kappa }{12\theta }\chi _{22\mid 40}^{(2)} -\chi _{22\mid 22}^{(1)}\frac{\langle V^2\omega ^2\rangle }{\langle V^2\rangle \langle \omega ^2\rangle }-\chi _{22\mid 22}^{(2)}\nonumber \\&-\chi _{22\mid 22}^{(3)}\frac{\langle (\mathbf {V}\cdot \varvec{\omega })^2\rangle }{\langle V^2\rangle \langle \omega ^2\rangle }-\frac{1}{3}\chi _{22\mid 22}^{(4)}-\frac{4\theta }{\kappa }\chi _{22\mid 04}^{(1)}\left( \frac{\langle \omega ^4\rangle }{\langle \omega ^2\rangle ^2}+1\right) -\frac{4\theta }{3\kappa } \chi _{22\mid 04}^{(2)}\nonumber \\&+\left( \chi _{20\mid 20}+\frac{4\theta }{\kappa }\chi _{20\mid 02}+\chi _{02\mid 02}+\frac{\kappa }{4\theta }\chi _{02\mid 20}\right) \frac{\langle V^2\omega ^2\rangle }{\langle V^2\rangle \langle \omega ^2\rangle }, \end{aligned}$$ $$\begin{aligned} \frac{d\langle (\mathbf {c}\cdot \mathbf {w})^2\rangle }{\nu _{\text {M}}dt}=&\frac{1}{\langle V^2\rangle \langle \omega ^2\rangle }\frac{\mathcal {J}_{\text {M}}[(\mathbf {V}\cdot \varvec{\omega })^2]}{\nu _{\text {M}}}-\frac{\langle (\mathbf {V}\cdot \varvec{\omega })^2\rangle }{\langle V^2\rangle ^2 \langle \omega ^2\rangle }\frac{\mathcal {J}_{\text {M}}[V^2]}{\nu _{\text {M}}}-\frac{\langle (\mathbf {V}\cdot \varvec{\omega })^2\rangle }{\langle V^2\rangle \langle \omega ^2\rangle ^2}\frac{\mathcal {J}_{\text {M}}[\omega ^2]}{\nu _{\text {M}}}\nonumber \\ =&-\frac{\kappa }{6\theta }\overline{\chi }_{22\mid 40}-\overline{\chi }_{22\mid 22}^{(1)}\frac{\langle (\mathbf {V}\cdot \varvec{\omega })^2\rangle }{\langle V^2\rangle \langle \omega ^2\rangle }-\overline{\chi }_{22\mid 22}^{(2)}\frac{\langle V^2\omega ^2\rangle }{\langle V^2\rangle \langle \omega ^2\rangle }-\overline{\chi }_{22\mid 22}^{(3)}-\frac{1}{3}\overline{\chi }_{22\mid 22}^{(4)}\nonumber \\&-\frac{8\theta }{3\kappa }\overline{\chi }_{22\mid 04} +\left( \chi _{20\mid 20}+\frac{4\theta }{\kappa }\chi _{20\mid 02}+\chi _{02\mid 02}+\frac{\kappa }{4\theta }\chi _{02\mid 20}\right) \frac{\langle (\mathbf {V}\cdot \varvec{\omega })^2\rangle }{\langle V^2\rangle \langle \omega ^2\rangle }. \end{aligned}$$ The departure of the HCS scaled VDF \(\phi (\mathbf {c},\mathbf {w})\) from the Maxwellian \(\pi ^{-3}e^{-c^2-w^2}\) can be measured by the four cumulants [80,81,82,83] $$\begin{aligned} a_{20}^{(0)}\equiv \frac{3}{5}\frac{\langle V^4\rangle }{\langle V^2\rangle ^2}-1=\frac{4}{15}\langle c^4\rangle -1,\quad a_{02}^{(0)}\equiv \frac{3}{5}\frac{\langle \omega ^4\rangle }{\langle \omega ^2\rangle ^2}-1=\frac{4}{15}\langle w^4\rangle -1, \end{aligned}$$ $$\begin{aligned} a_{11}^{(0)}\equiv \frac{\langle V^2\omega ^2\rangle }{\langle V^2\rangle \langle \omega ^2\rangle }-1=\frac{4}{9}\langle c^2w^2\rangle -1, \end{aligned}$$ $$\begin{aligned} a_{00}^{(1)}\equiv \frac{6}{5}\frac{\langle (\mathbf {V}\cdot \varvec{\omega })^2\rangle -\frac{1}{3}\langle V^2\omega ^2\rangle }{\langle V^2\rangle \langle \omega ^2\rangle }=\frac{8}{15}\left[ \langle (\mathbf {c}\cdot \mathbf {w})^2\rangle -\frac{1}{3}\langle c^2w^2\rangle \right] . \end{aligned}$$ In terms of those cumulants, Eqs. 
(24) can be recast as $$\begin{aligned} \nu _{\text {M}}^{-1}\frac{d}{ dt}\left[ \begin{array}{c} a_{20}^{(0)}\\ a_{02}^{(0)}\\ a_{11}^{(0)}\\ a_{00}^{(1)} \end{array} \right] + \mathsf {M}\cdot \left[ \begin{array}{c} a_{20}^{(0)}\\ a_{02}^{(0)}\\ a_{11}^{(0)}\\ a_{00}^{(1)} \end{array} \right] =-\mathsf {L}, \end{aligned}$$ where the elements of the matrices \(\mathsf {M}\) and \(\mathsf {L}\) are $$\begin{aligned} M_{11}=\chi _{40\mid 40}^{(1)}-2\chi _{20\mid 20}-\frac{8\theta }{\kappa }\chi _{20\mid 02}, \end{aligned}$$ $$\begin{aligned} M_{12}=\frac{16\theta ^2}{\kappa ^2}\chi _{40\mid 04},\quad M_{13}=\frac{4\theta }{\kappa }\chi _{40\mid 22}^{(1)},\quad M_{14}=-\frac{2\theta }{\kappa }\chi _{40\mid 22}^{(1)}, \end{aligned}$$ $$\begin{aligned} M_{21}=\frac{\kappa ^2}{16\theta ^2}\chi _{04\mid 40},\quad M_{22}=\chi _{04\mid 04}^{(1)}-2\chi _{02\mid 02}-\frac{\kappa }{2\theta }\chi _{02\mid 20}, \end{aligned}$$ $$\begin{aligned} M_{23}=\frac{\kappa }{4\theta }\chi _{04\mid 22}^{(1)},\quad M_{24}=-\frac{\kappa }{8\theta }\chi _{04\mid 22}^{(1)}, \end{aligned}$$ $$\begin{aligned} M_{31}=\frac{\kappa }{4\theta }\chi _{22\mid 40}^{(1)},\quad M_{32}=\frac{4\theta }{\kappa }\chi _{22\mid 04}^{(1)},\quad M_{34}=\frac{1}{2}\chi _{22\mid 22}^{(3)}, \end{aligned}$$ (27e) $$\begin{aligned} M_{33}=\frac{3}{5}\left[ \chi _{22\mid 22}^{(1)}-\chi _{20\mid 20}-\frac{4\theta }{\kappa }\chi _{20\mid 02}-\chi _{02\mid 02}-\frac{\kappa }{4\theta }\chi _{02\mid 20}\right] +\frac{1}{5}\chi _{22\mid 22}^{(3)}, \end{aligned}$$ (27f) $$\begin{aligned} M_{41}=0,\quad M_{42}=0,\quad M_{43}=\overline{\chi }_{22\mid 22}^{(2)}+\frac{2}{5}M_{44}, \end{aligned}$$ $$\begin{aligned} M_{44}=\frac{5}{6}\left[ \overline{\chi }_{22\mid 22}^{(1)}-\chi _{20\mid 20}-\frac{4\theta }{\kappa }\chi _{20\mid 02}-\chi _{02\mid 02}-\frac{\kappa }{4\theta }\chi _{02\mid 20}\right] , \end{aligned}$$ $$\begin{aligned} L_1=M_{11}+2M_{12}+2M_{13}+\frac{3}{5}\chi _{40\mid 40}^{(2)}+\frac{1}{5}\chi _{40\mid 40}^{(3)} , \end{aligned}$$ $$\begin{aligned} L_2=2M_{21}+M_{22}+2M_{23}+\frac{3}{5}\chi _{04\mid 04}^{(2)}+\frac{1}{5}\chi _{04\mid 04}^{(3)}, \end{aligned}$$ $$\begin{aligned} L_3=\frac{8}{5}M_{31}+\frac{8}{5}M_{32}+M_{33}+\frac{\kappa }{20\theta }\chi _{22\mid 40}^{(2)}+\frac{3}{5}\chi _{22\mid 22}^{(2)}+\frac{1}{5}\chi _{22\mid 22}^{(4)}+\frac{4\theta }{5\kappa }\chi _{22\mid 04}^{(2)}, \end{aligned}$$ $$\begin{aligned} L_4=M_{43}+\frac{\kappa }{6\theta }\overline{\chi }_{22\mid 40}+\overline{\chi }_{22\mid 22}^{(3)}+\frac{1}{3}\overline{\chi }_{22\mid 22}^{(4)}+\frac{8\theta }{3\kappa }\overline{\chi }_{22\mid 04}. \end{aligned}$$ a Plot of the smallest eigenvalue \(\mu _{\min }\) as a function of \({\beta }\) for \(\kappa =\frac{2}{5}\) and \(\alpha = 0.4\) (—), 0.6 (–\(\cdot \)–), 0.8 (– –), and 1 (\(\cdots \)). The circles denote the location of the threshold value (\(\beta _0\)) below which \(\mu _{\min }<0\). b Plot of \(\beta _0\) as a function of \(\alpha \) for \(\kappa =\frac{2}{5}\). The fourth-degree moments diverge in the region below the curve. The time evolution of the cumulants is characterized by the eigenvalues of the matrix \(\mathsf {M}\), the asymptotic behavior being governed by the smallest eigenvalue, \(\mu _{\min }\). If \(\mu _{\min }>0\), the cumulants relax to their finite stationary values \(\left[ a_{20}^{(0)},a_{02}^{(0)},a_{11}^{(0)},a_{00}^{(1)}\right] ^\dagger =-\mathsf {M}^{-1}\cdot \mathsf {L}\). On the other hand, if \(\mu _{\min }<0\) then the cumulants diverge. 
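In practice, once \(\mathsf {M}\) and \(\mathsf {L}\) have been assembled from Eqs. (27), with the Appendix A coefficients evaluated at the HCS temperature ratio of Eqs. (23), the stationary cumulants and \(\mu _{\min }\) follow from elementary linear algebra. The sketch below is only an outline added for illustration: the builder functions build_M and build_L are hypothetical placeholders standing for an encoding of Eqs. (27), not functions defined in the paper.

import numpy as np

def stationary_cumulants(M, L):
    # Stationary solution of Eq. (26): [a20, a02, a11, a00]^T = -M^{-1} . L
    return -np.linalg.solve(M, L)

def slowest_rate(M):
    # Smallest eigenvalue of M (in units of nu_M); mu_min > 0 means the
    # fourth-degree cumulants relax to finite values, mu_min < 0 means divergence.
    return np.min(np.linalg.eigvals(M).real)

# Hypothetical builders encoding Eqs. (27) with the Appendix A coefficients:
# M = build_M(alpha, beta, kappa, theta)   # 4x4 array
# L = build_L(alpha, beta, kappa, theta)   # length-4 array
# a20, a02, a11, a00 = stationary_cumulants(M, L)
# mu_min = slowest_rate(M)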
Figure 3a shows the \({\beta }\)-dependence of \(\mu _{\min }\) for the same representative values of \({\alpha }\) as in Fig. 2. We observe that, at a given \({\alpha }<1\), there exists a threshold value \({\beta }_0\) such that \(\mu _{\min }<0\) if \({\beta }<{\beta }_0\). The \({\alpha }\)-dependence of \({\beta }_0\) is shown in Fig. 3b. For the pairs \(({\alpha },{\beta })\) below the curve in Fig. 3b, the cumulants, and hence the fourth-degree moments, diverge. This divergence is likely associated with an algebraic high-velocity tail of the VDF, as already present in the case of the IMM [34,35,36,37,38, 41,42,43,44]. If the particles are perfectly smooth (\({\beta }=-1\)), the temperature ratio \(\theta \) is irrelevant and all the matrix elements vanish except \(M_{11}=\frac{1}{120}(1+{\alpha })^2(5+6{\alpha }-3{\alpha }^2)\), \(M_{33}=\frac{1}{20}(1+{\alpha })^2\), \(M_{43}=\frac{1}{36}(1+{\alpha })^2\), \(M_{44}=\frac{1}{9}(1+{\alpha })^2\), and \(L_1=-\frac{1}{20}(1-{\alpha }^2)^2\). This implies that \(a_{02}^{(0)}\) is arbitrary, \(a_{11}^{(0)}=a_{00}^{(1)}=0\), and \(a_{20}^{(0)}=6(1-{\alpha })^2/(5+6{\alpha }-3{\alpha }^2)\). The latter result agrees with a previous derivation for the IMM [39]. On the other hand, in the special case of the Pidduck gas (\({\alpha }={\beta }=1\)), the HCS reduces to equilibrium and Eqs. (23) yield \(\theta =1\), as expected. Moreover, from Eqs. (B15 and B16), one gets \(\mathsf {L}=0\), implying that the cumulants (25) vanish in that case.

Fig. 4 Plot of the cumulants a \(a_{20}^{(0)}\), b \(a_{02}^{(0)}\), c \(a_{11}^{(0)}\), and d \(a_{00}^{(1)}\) as functions of \({\beta }\) for \(\kappa =\frac{2}{5}\) and \(\alpha = 0.4\) (—), 0.6 (–\(\cdot \)–), 0.8 (– –), and 1 (\(\cdots \)). The vertical lines denote the asymptotes at \({\beta }={\beta }_0\).

Apart from the IMM and the equilibrium Pidduck gas, in the general case with \({\alpha }<1\) and \(\mid {\beta }\mid <1\), the stationary values of the cumulants are given, as said above, by \(\left[ a_{20}^{(0)},a_{02}^{(0)},a_{11}^{(0)},a_{00}^{(1)}\right] ^\dagger =-\mathsf {M}^{-1}\cdot \mathsf {L}\), provided that \({\beta }>{\beta }_0\). The \({\beta }\)-dependence of the four cumulants is displayed in Fig. 4 for the same choices of \({\alpha }\) as in Figs. 2 and 3a. The positivity of \(a_{11}^{(0)}\) implies that high translational velocities are strongly correlated to high angular velocities [80,81,82,83]. In turn, the negative values of \(a_{00}^{(1)}\) mean that quasinormal orientations between the translational and angular velocity vectors tend to be favored against quasiparallel orientations ("lifted-tennis-ball" effect) [76,77,78, 80,81,82,83].

In this work, we have worked out a simple Maxwell model (IRMM) by keeping the collision rules of inelastic rough hard spheres but, on the other hand, assuming an effective mean-field collision rate independent of the relative velocity of the colliding pair. The latter assumption allows one to express a collisional moment of degree k as a bilinear combination of velocity moments of degrees \(i\le k\) and \(j\le k\) with \(i+j=k\), as happens in the well-known IMM (smooth particles) [42,43,44]. Nevertheless, the derivation of exact results in the IRMM is much more complicated than in the IMM.
Not only are the translational and rotational degrees of freedom entangled, thus increasing the number and structure of the moments, but also the coefficients in the bilinear combinations are functions of the coefficient of normal restitution (as in the IMM) and, additionally, of the coefficient of tangential restitution and the reduced moment of inertia. Specifically, we have considered the moments of first and second degree, the moments of third degree related to the heat flux, and the isotropic fourth-degree moments. The structure of the associated collisional moments is displayed in Table 1. The number of coefficients is 60, since in some cases a common coefficient factorizes more than one product of moments. The exact expressions of those 60 coefficients in terms of the two coefficients of restitution (\({\alpha }\) and \({\beta }\)) and of the reduced moment of inertia (\(\kappa \)) are presented in Appendix A. We have checked that the coefficients satisfy some consistency tests. First, they reduce to the expressions reported in the literature in the smooth limit (\({\beta }=-1\)) [42,43,44]. Next, when particularized to elastic and perfectly rough particles (Pidduck model, \({\alpha }={\beta }=1\)), the coefficients obey a number of relations needed to allow the equilibrium VDF to be an exact solution of the model. The knowledge of the collisional moments stemming from the IRMM is not constrained to a specific physical situation and, thus, it can be exploited in several applications. Here we have applied the results to the HCS. Specifically, the rotational-to-translational temperature ratio has been found [see Eqs. (23)] and the linear evolution equations for the fourth-degree cumulants have been obtained [see Eq. (26)]. Interestingly, we have found that those cumulants diverge in time if, at a given \({\alpha }\), the coefficient of tangential restitution lies below a threshold value \({\beta }_0({\alpha })\) [see Fig. 3]. This reflects the existence of an algebraic high-velocity tail of the VDF, reminiscent of the case of the IMM [34,35,36,37,38, 41]. If \({\beta }>{\beta }_0({\alpha })\), the exact stationary cumulants have been obtained (see Fig. 4). In general, the departure of the HCS VDF from the equilibrium one is rather strong: (a) the (marginal) translational and angular VDFs exhibit large excess kurtoses, (b) high translational velocities are correlated to high angular velocities, and (c) the translational and angular velocities tend to adopt quasi-normal orientations ("lifted-tennis-ball" effect). It must be stressed that, in analogy to the relationship between the IHSM and the IMM, the IRMM is proposed here as a mathematical model and not as an approximation of the IRHSM. In this respect, the predictions obtained analytically from the IRMM do not need to be interpreted strictly as a quantitative replacement for the results obtained either numerically or by simulations from the IRHSM. On the other hand, the IRMM provides a tractable toy model that can be used to cleanly unveil nontrivial physical properties, guide the construction of approximations to the IRHSM, and serve as a benchmark for numerical or simulation approaches. Notwithstanding this, a closer contact with the IRHSM can be provided by the augmented IRMM defined by Eq. (20). Relying on the set of collisional moments displayed in Table 1 with coefficients given in Appendix A, further applications of the model are envisioned. In particular, the Navier–Stokes transport coefficients can be exactly derived. 
Preliminary results show the existence of a new "spin" viscosity coefficient, which was overlooked in previous Sonine approximations of the IRHSM [84, 88, 89]. In addition, the exact non-Newtonian rheological properties of a granular gas under simple shear flow can also be obtained. The results of both applications will be published elsewhere. The datasets employed to generate Figs. 2, 3, 4 are available from the corresponding author on reasonable request. Brilliantov, N.V., Pöschel, T.: Kinetic Theory of Granular Gases. Oxford University Press, Oxford (2004) Garzó, V.: Granular Gaseous Flows. A Kinetic Theory Approach to Granular Gaseous Flows. Springer, Cham (2019) Kremer, G.M.: An Introduction to the Boltzmann Equation and Transport Processes in Gases. Springer, Berlin (2010) Maxwell, J.C.: IV. On the dynamical theory of gases. Philos. Trans. R. Soc. (London) 157, 49–88 (1867). https://doi.org/10.1098/rstl.1867.0004 Truesdell, C., Muncaster, R.G.: Fundamentals of Maxwell's Kinetic Theory of a Simple Monatomic Gas. Academic Press, New York (1980) Ernst, M.H.: Nonlinear model-Boltzmann equations and exact solutions. Phys. Rep. 78, 1–171 (1981). https://doi.org/10.1016/0370-1573(81)90002-8 Article ADS MathSciNet Google Scholar Garzó, V., Santos, A.: Kinetic Theory of Gases in Shear Flows: Nonlinear Transport. Fundamental Theories of Physics. Springer, Dordrecht (2003) Santos, A.: Solutions of the moment hierarchy in the kinetic theory of Maxwell models. Cont. Mech. Thermodyn. 21, 361–387 (2009). https://doi.org/10.1007/s00161-009-0113-5 Bobylev, A.V., Carrillo, J.A., Gamba, I.M.: On some properties of kinetic and hydrodynamic equations for inelastic interactions. J. Stat. Phys. 98, 743–773 (2000). https://doi.org/10.1023/A:1018627625800 Carrillo, J.A., Cercignani, C., Gamba, I.M.: Steady states of a Boltzmann equation for driven granular media. Phys. Rev. E 62, 7700–7707 (2000). https://doi.org/10.1103/PhysRevE.62.7700 Ben-Naim, E., Krapivsky, P.L.: Multiscaling in inelastic collisions. Phys. Rev. E 61, R5–R8 (2000). https://doi.org/10.1103/PhysRevE.61.R5 Cercignani, C.: Shear flow of a granular material. J. Stat. Phys. 102, 1407–1415 (2001). https://doi.org/10.1023/A:1004804815471 Ben-Naim, E., Krapivsky, P.L.: In Granular Gas Dynamics, Lecture Notes. In: Pöschel, T., Luding, S. (eds.) Physics, vol. 624, pp. 65–94. Springer, Berlin (2003) Bobylev, A.V., Cercignani, C.: Self-similar asymptotics for the Boltzmann equation with inelastic and elastic interactions. J. Stat. Phys. 110, 333–375 (2003). https://doi.org/10.1023/A:1021031031038 Bobylev, A.V., Cercignani, C., Toscani, G.: Proof of an asymptotic property of self-similar solutions of the Boltzmann equation for granular materials. J. Stat. Phys. 111, 403–417 (2003). https://doi.org/10.1023/A:1022273528296 Santos, A., Ernst, M.H.: Exact steady-state solution of the Boltzmann equation: a driven one-dimensional inelastic Maxwell gas. Phys. Rev. E 68, 011305 (2003). https://doi.org/10.1103/PhysRevE.68.011305 Garzó, V.: Nonlinear transport in inelastic Maxwell mixtures under simple shear flow. J. Stat. Phys. 112, 657–683 (2003). https://doi.org/10.1023/A:1023828109434 Brito, R., Ernst, M:. In: Korutcheva, E., Cuerno, R. (eds.) Advances in Condensed Matter and Statistical Mechanics, pp. 177–202. Nova Science Publishers, New York (2004) Garzó, V., Astillero, A.: Transport coefficients for inelastic Maxwell mixtures. J. Stat. Phys. 118, 935–971 (2005). 
https://doi.org/10.1007/s10955-004-2006-0 Bisi, M., Carrillo, J.A., Toscani, G.: Decay rates in probability metrics towards homogeneous cooling states for the inelastic Maxwell model. J. Stat. Phys. 124, 625–653 (2006). https://doi.org/10.1007/s10955-006-9035-9 Bobylev, A.V., Gamba, I.M.: Boltzmann equations for mixtures of Maxwell gases: exact solutions and power like tails. J. Stat. Phys. 124, 497–516 (2006). https://doi.org/10.1007/s10955-006-9044-8 Bolley, F., Carrillo, J.A.: Tanaka theorem for inelastic Maxwell models. Commun. Math. Phys. 276, 287–314 (2007). https://doi.org/10.1007/s00220-007-0336-x Carrillo, J.A., Toscani, G.: Contractive probability metrics and asymptotic behavior of dissipative kinetic equations. Riv. Mat. Univ. Parma 7(6), 75–198 (2007) Garzó, V.: Shear-rate dependent transport coefficients for inelastic Maxwell models. J. Phys. A: Math. Theor. 40, 10729–10767 (2007). https://doi.org/10.1088/1751-8113/40/35/002 Bobylev, A.V., Cercignani, C., Gamba, I.M.: Generalized Kinetic Maxwell Models of Granular Gases, Lecture Notes in Mathematics, vol. 1937, pp. 23–58. Springer, Berlin (2008) Bobylev, A.V., Cercignani, C., Gamba, I.M.: On the self-similar asymptotics for generalized non-linear kinetic Maxwell models. Commun. Math. Phys. 291, 599–644 (2009). https://doi.org/10.1007/s00220-009-0876-3 Carlen, E.A., Carrillo, J.A., Carvalho, M.C.: Strong convergence towards homogeneous cooling states for dissipative Maxwell models. Ann. I. H. Poincaré Anal Non-linéaire 26, 1675–1700 (2009). https://doi.org/10.1016/j.anihpc.2008.10.005 Garzó, V., Trizac, E.: Rheological properties for inelastic Maxwell mixtures under shear flow. J. Non-Newton. Fluid Mech. 165, 932–940 (2010). https://doi.org/10.1016/j.jnnfm.2010.01.016 Furioli, G., Pulvirenti, A., Terraneo, E., Toscani, G.: Convergence to self-similarity for the Boltzmann equation for strongly inelastic Maxwell molecules. Ann. I. H. Poincaré Anal Non-linéaire 27, 719–737 (2010). https://doi.org/10.1016/j.anihpc.2009.11.005 Brey, J.J., de Soria, M.I.G., Maynar, P.: Breakdown of hydrodynamics in the inelastic Maxwell model of granular gases. Phys. Rev. E 82, 021303 (2010). https://doi.org/10.1103/PhysRevE.82.021303 Khalil, N., Garzó, V., Santos, A.: Hydrodynamic Burnett equations for inelastic Maxwell models of granular gases. Phys. Rev. E 89, 052201 (2014). https://doi.org/10.1103/PhysRevE.89.052201 Gómez González, R., Garzó, V.: Simple shear flow in granular suspensions: inelastic Maxwell models and BGK-type kinetic model. J. Stat. Mech. (2019). https://doi.org/10.1088/1742-5468/aaf719 Khalil, N., Garzó, V.: Unified hydrodynamic description for driven and undriven inelastic Maxwell mixtures at low density. J. Phys. A: Math. Theor. 53, 355002 (2020). https://doi.org/10.1088/1751-8121/ab9f72 Baldassarri, A., Marconi, U.M.B., Puglisi, A.: Influence of correlations on the velocity statistics of scalar granular gases. Europhys. Lett. 58, 14–20 (2002). https://doi.org/10.1209/epl/i2002-00600-6 Ben-Naim, E., Krapivsky, P.L.: Scaling, multiscaling, and nontrivial exponents in inelastic collision processes. Phys. Rev. E 66, 011309 (2002). https://doi.org/10.1103/PhysRevE.66.011309 Krapivsky, P.L., Ben-Naim, E.: Nontrivial velocity distributions in inelastic gases. J. Phys. A: Math. Gen. 35, L147–L152 (2002). https://doi.org/10.1088/0305-4470/35/11/103 Ernst, M.H., Brito, R.: High-energy tails for inelastic Maxwell models. Europhys.
Lett. 58, 182–187 (2002). https://doi.org/10.1209/epl/i2002-00622-0 Ernst, M.H., Brito, R.: Scaling solutions of inelastic Boltzmann equations with over-populated high energy tails. J. Stat. Phys. 109, 407–432 (2002). https://doi.org/10.1023/A:1020437925931 Santos, A.: Transport coefficients of \(d\)-dimensional inelastic Maxwell models. Physica A 321, 442–466 (2003). https://doi.org/10.1016/S0378-4371(02)01005-1 Santos, A., Garzó, V.: Simple shear flow in inelastic Maxwell models. J. Stat. Mech. (2007). https://doi.org/10.1088/1742-5468/2007/08/P08021 Ernst, M.H., Brito, R.: Driven inelastic Maxwell models with high energy tails. Phys. Rev. E 65, 040301 (2002). https://doi.org/10.1103/PhysRevE.65.040301 Garzó, V., Santos, A.: Third and fourth degree collisional moments for inelastic Maxwell model. J. Phys. A: Math. Theor. 40, 14927–14943 (2007). https://doi.org/10.1088/1751-8113/40/50/002 Garzó, V., Santos, A.: Hydrodynamics of inelastic Maxwell models. Math. Model. Nat. Phenom. 6(4), 37–76 (2011). https://doi.org/10.1051/mmnp/20116403 Santos, A., Garzó, V.: Collisional rates for the inelastic Maxwell model. Application to the divergence of anisotropic high-order velocity moments in the homogeneous cooling state. Granul. Matter 14, 105–110 (2012). https://doi.org/10.1007/s10035-012-0336-1 Goldhirsch, I.: Rapid granular flows. Annu. Rev. Fluid Mech. 35, 267–293 (2003). https://doi.org/10.1146/annurev.fluid.35.101101.161114 Jenkins, J.T., Richman, M.W.: Kinetic theory for plane flows of a dense gas of identical, rough, inelastic, circular disks. Phys. Fluids 28, 3485–3494 (1985). https://doi.org/10.1063/1.865302 Lun, C.K.K., Savage, S.B.: A simple kinetic theory for granular flow of rough, inelastic, spherical particles. J. Appl. Mech. 54, 47–53 (1987). https://doi.org/10.1115/1.3172993 Campbell, C.S.: The stress tensor for simple shear flows of a granular material. J. Fluid Mech. 203, 449–473 (1989). https://doi.org/10.1017/S0022112089001540 Lun, C.K.K.: Kinetic theory for granular flow of dense, slightly inelastic, slightly rough spheres. J. Fluid Mech. 233, 539–559 (1991). https://doi.org/10.1017/S0022112091000599 Lun, C.K.K., Bent, A.A.: Numerical simulation of inelastic frictional spheres in simple shear flow. J. Fluid Mech. 258, 335–353 (1994). https://doi.org/10.1017/S0022112094003356 Goldshtein, A., Shapiro, M.: Mechanics of collisional motion of granular materials. Part 1. General hydrodynamic equations. J. Fluid Mech. 282, 75–114 (1995). https://doi.org/10.1017/S0022112095000048 Luding, S.: Granular materials under vibration: simulations of rotating spheres. Phys. Rev. E 52, 4442–4457 (1995). https://doi.org/10.1103/PhysRevE.52.4442 Lun, C.K.K.: Granular dynamics of inelastic spheres in Couette flow. Phys. Fluids 8, 2868–2883 (1996). https://doi.org/10.1063/1.869068 Zamankhan, P., Tafreshi, H.V., Polashenski, W., Sarkomaa, P., Hyndman, C.L.: Shear induced diffusive mixing in simulations of dense Couette flow of rough, inelastic hard spheres. J. Chem. Phys. 109, 4487–4491 (1998). https://doi.org/10.1063/1.477076 McNamara, S., Luding, S.: Energy nonequipartition in systems of inelastic, rough spheres. Phys. Rev. E 58, 2247–2250 (1998). https://doi.org/10.1103/PhysRevE.58.2247 Luding, S., Huthmann, M., McNamara, S., Zippelius, A.: Homogeneous cooling of rough, dissipative particles: theory and simulations. Phys. Rev. E 58, 3416–3425 (1998). 
https://doi.org/10.1103/PhysRevE.58.3416 Herbst, O., Huthmann, M., Zippelius, A.: Dynamics of inelastically colliding spheres with Coulomb friction: relaxation of translational and rotational energy. Granul. Matter 2, 211–219 (2000). https://doi.org/10.1007/PL00010915 Aspelmeier, T., Huthmann, M., Zippelius, A.: In: Pöschel, T., Luding, S. (eds.) Granular Gases, Lectures Notes In Physics, vol. 564, pp. 31–58. Springer, Berlin (2001) Jenkins, J.T., Zhang, C.: Kinetic theory for identical, frictional, nearly elastic spheres. Phys. Fluids 14, 1228–1235 (2002). https://doi.org/10.1063/1.1449466 Polashenski, W., Zamankhan, P., Mäkiharju, S., Zamankhan, P.: Fine structures in sheared granular flows. Phys. Rev. E 66, 021303 (2002). https://doi.org/10.1103/PhysRevE.66.021303 Cafiero, R., Luding, S., Herrmann, H.J.: Rotationally driven gas of inelastic rough spheres. Europhys. Lett. 60, 854–860 (2002). https://doi.org/10.1209/epl/i2002-00295-7 Mitarai, N., Hayakawa, H., Nakanishi, H.: Collisional granular flow as a micropolar fluid. Phys. Rev. Lett. 88, 174301 (2002). https://doi.org/10.1103/PhysRevLett.88.174301 Viot, P., Talbot, J.: Thermalization of an anisotropic granular particle. Phys. Rev. E 69, 051106 (2004). https://doi.org/10.1103/PhysRevE.69.051106 Goldhirsch, I., Noskowicz, S.H., Bar-Lev, O.: Nearly smooth granular gases. Phys. Rev. Lett. 95, 068002 (2005). https://doi.org/10.1103/PhysRevLett.95.068002 Goldhirsch, I., Noskowicz, S.H., Bar-Lev, O.: Hydrodynamics of nearly smooth granular gases. J. Phys. Chem. B 109, 21449–21470 (2005). https://doi.org/10.1021/jp0532667 Zippelius, A.: Granular gases. Physica A 369, 143–158 (2006). https://doi.org/10.1016/j.physa.2006.04.012 Piasecki, J., Talbot, J., Viot, P.: Angular velocity distribution of a granular planar rotator in a thermalized bath. Phys. Rev. E 75, 051307 (2007). https://doi.org/10.1103/PhysRevE.75.051307 Cornu, F., Piasecki, J.: Granular rough sphere in a low-density thermal bath. Physica A 387, 4856–4862 (2008). https://doi.org/10.1016/j.physa.2008.03.014 Santos, A.: Homogeneous free cooling state in binary granular fluids of inelastic rough hard spheres. AIP Conf. Proc. 1333, 128–133 (2011). https://doi.org/10.1063/1.3562637 Vega Reyes, F., Lasanta, A., Santos, A., Garzó, V.: Thermal properties of an impurity immersed in a granular gas of rough hard spheres. EPJ Web Conf. 140, 04003 (2017). https://doi.org/10.1051/epjconf/201714004003 Vega Reyes, F., Lasanta, A., Santos, A., Garzó, V.: Energy nonequipartition in gas mixtures of inelastic rough hard spheres: the tracer limit. Phys. Rev. E 96, 052901 (2017). https://doi.org/10.1103/PhysRevE.96.052901. Erratum: 100, 049901 (2019) Garzó, V., Santos, A., Kremer, G.M.: Impact of roughness on the instability of a free-cooling granular gas. Phys. Rev. E 97, 052901 (2018). https://doi.org/10.1103/PhysRevE.97.052901 Torrente, A., López-Castaño, M.A., Lasanta, A., Vega Reyes, F., Prados, A., Santos, A.: Large Mpemba-like effect in a gas of inelastic rough hard spheres. Phys. Rev. E 99, 060901 (2019). https://doi.org/10.1103/PhysRevE.99.060901 Gómez González, R., Garzó, V.: Non-Newtonian rheology in inertial suspensions of inelastic rough hard spheres under simple shear flow. Phys. Fluids 32, 073315 (2020). https://doi.org/10.1063/5.0015241 Huthmann, M., Zippelius, A.: Dynamics of inelastically colliding rough spheres: relaxation of translational and rotational energy. Phys. Rev. E 56, R6275–R6278 (1997). 
https://doi.org/10.1103/PhysRevE.56.R6275 Brilliantov, N.V., Pöschel, T., Kranz, W.T., Zippelius, A.: Translations and rotations are correlated in granular gases. Phys. Rev. Lett. 98, 128001 (2007). https://doi.org/10.1103/PhysRevLett.98.128001 Kranz, W.T., Brilliantov, N.V., Pöschel, T., Zippelius, A.: Correlation of spin and velocity in the homogeneous cooling state of a granular gas of rough particles. Eur. Phys. J. Spec. Top. 179, 91–111 (2009). https://doi.org/10.1140/epjst/e2010-01196-0 Rongali, R., Alam, M.: Higher-order effects on orientational correlation and relaxation dynamics in homogeneous cooling of a rough granular gas. Phys. Rev. E 89, 062201 (2014). https://doi.org/10.1103/PhysRevE.89.062201 Santos, A., Kremer, G.M., Garzó, V.: Energy production rates in fluid mixtures of inelastic rough hard spheres. Prog. Theor. Phys. Suppl. 184, 31–48 (2010). https://doi.org/10.1143/PTPS.184.31 Santos, A., Kremer, G.M., dos Santos, M.: Sonine approximation for collisional moments of granular gases of inelastic rough spheres. Phys. Fluids 23, 030604 (2011). https://doi.org/10.1063/1.3558876 Vega Reyes, F., Santos, A., Kremer, G.M.: Role of roughness on the hydrodynamic homogeneous base state of inelastic spheres. Phys. Rev. E 89, 020202 (2014). https://doi.org/10.1103/PhysRevE.89.020202 Vega Reyes, F., Santos, A., Kremer, G.M.: Properties of the homogeneous cooling state of a gas of inelastic rough particles. AIP Conf. Proc. 1628, 494–501 (2014). https://doi.org/10.1063/1.4902634 Vega-Reyes, F., Santos, A.: Steady state in a gas of inelastic rough spheres heated by a uniform stochastic force. Phys. Fluids 27, 113301 (2015). https://doi.org/10.1063/1.4934727 Kremer, G.M., Santos, A., Garzó, V.: Transport coefficients of a granular gas of inelastic rough hard spheres. Phys. Rev. E 90, 022205 (2014). https://doi.org/10.1103/PhysRevE.90.022205 Santos, A.: Interplay between polydispersity, inelasticity, and roughness in the freely cooling regime of hard-disk granular gases. Phys. Rev. E 98, 012904 (2018). https://doi.org/10.1103/PhysRevE.98.012904 Megías, A., Santos, A.: Driven and undriven states of multicomponent granular gases of inelastic and rough hard disks or spheres. Granul. Matter 21, 49 (2019). https://doi.org/10.1007/s10035-019-0901-y Megías, A., Santos, A.: Energy production rates of multicomponent granular gases of rough particles. A unified view of hard-disk and hard-sphere systems. AIP Conf. Proc. 2132, 080003 (2019). https://doi.org/10.1063/1.5119584 Megías, A., Santos, A.: Hydrodynamics of granular gases of inelastic and rough hard disks or spheres. I. Transport coefficients. Phys. Rev. E 104, 034901 (2021). https://doi.org/10.1103/PhysRevE.104.034901 Megías, A., Santos, A.: Hydrodynamics of granular gases of inelastic and rough hard disks or spheres. II. Stability analysis. Phys. Rev. E 104, 034902 (2021). https://doi.org/10.1103/PhysRevE.104.034902 Pidduck, F.B.: The kinetic theory of a special type of rigid molecule. Proc. R. Soc. Lond. A 101, 101–112 (1922). https://doi.org/10.1098/rspa.1922.0028 For an interactive animation, see A. Santos, "Inelastic Collisions of Two Rough Spheres". Wolfram Demonstrations Project. https://www.demonstrations.wolfram.com/InelasticCollisionsOfTwoRoughSpheres/ (2010) Cercignani, C.: The Boltzmann Equation and Its Applications. Springer, New York (1988) Santos, A.: A Bhatnagar-Gross-Krook-like model kinetic equation for a granular gas of inelastic rough hard spheres. AIP Conf. Proc. 1333, 41–48 (2011). https://doi.org/10.1063/1.3562623 A.S. 
acknowledges financial support from Grant PID2020-112936GB-I00 funded by MCIN/AEI/10.13039/501100011033, and from grants IB20079 and GR21014 funded by Junta de Extremadura (Spain) and by ERDF "A way of making Europe." G.M.K. is grateful to the Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq) for financial support through Grant No. 304054/2019-4. Open Access funding provided thanks to the CRUE-CSIC agreement with Springer Nature. Gilberto M. Kremer and Andrés Santos have contributed equally to this work. Departamento de Física, Universidade Federal do Paraná, Curitiba, Brazil Gilberto M. Kremer Departamento de Física and Instituto de Computación Científica Avanzada (ICCAEx), Universidad de Extremadura, 06006, Badajoz, Spain Andrés Santos Correspondence to Andrés Santos. Communicated by Eric A. Carlen. Appendix A: Explicit Expressions for the Coefficients Appearing in Table 1 The 60 coefficients appearing in Table 1 are given by $$\begin{aligned} \varphi _{01\mid 01}=&\frac{4\widetilde{\beta }}{3\kappa }, \end{aligned}$$ $$\begin{aligned} \chi _{20\mid 20}=&\frac{2}{3}\left[ \widetilde{\alpha }\left( 1-\widetilde{\alpha }\right) +2\widetilde{\beta }\left( 1-\widetilde{\beta }\right) \right] ,\quad \chi _{20\mid 02}=-\frac{\widetilde{\beta }^2}{3}, \end{aligned}$$ $$\begin{aligned} \psi _{20\mid 20}=&\frac{2}{15}\left( 5\widetilde{\alpha }-2\widetilde{\alpha }^2-6\widetilde{\alpha }\widetilde{\beta }+10\widetilde{\beta }-7\widetilde{\beta }^2\right) ,\quad \psi _{20\mid 02}=\frac{\widetilde{\beta }^2}{6}, \end{aligned}$$ $$\begin{aligned} \chi _{02\mid 02}=&\frac{4\widetilde{\beta }}{3\kappa }\left( 1-\frac{\widetilde{\beta }}{\kappa }\right) ,\quad \chi _{02\mid 20}=-\frac{16\widetilde{\beta }^2}{3\kappa ^2}, \end{aligned}$$ $$\begin{aligned} \psi _{02\mid 02}=&\frac{2\widetilde{\beta }}{15\kappa }\left( 10-\frac{7\widetilde{\beta }}{\kappa }\right) , \quad \psi _{02\mid 20}=\frac{8\widetilde{\beta }^2}{3\kappa ^2}, \end{aligned}$$ $$\begin{aligned} \psi _{11\mid 11}=&\frac{1}{3}\left( \widetilde{\alpha }+2\widetilde{\beta }\frac{1+\kappa }{\kappa }\right) , \end{aligned}$$ $$\begin{aligned} \varphi _{30\mid 30}=&\frac{1}{15}\left( 15\widetilde{\alpha }-11\widetilde{\alpha }^2-8\widetilde{\alpha }\widetilde{\beta }+30\widetilde{\beta }-26\widetilde{\beta }^2\right) , \quad \varphi _{30\mid 12}=-\frac{\widetilde{\beta }^2}{6}, \end{aligned}$$ $$\begin{aligned} \varphi _{12\mid 30}=&-\frac{8\widetilde{\beta }^2}{3\kappa ^2},\quad \varphi _{12\mid 12}^{(1)}=\frac{1}{15}\left[ 5\widetilde{\alpha }+10\widetilde{\beta }+\frac{2\widetilde{\beta }}{\kappa }\left( 10-4\widetilde{\alpha }-11\widetilde{\beta }-\frac{5\widetilde{\beta }}{\kappa }\right) \right] , \end{aligned}$$ (A8a) $$\begin{aligned} \varphi _{12\mid 12}^{(2)}=&\frac{2\widetilde{\beta }}{15\kappa }\left( 2\widetilde{\alpha }+3\widetilde{\beta }\right) ,\quad \varphi _{12\mid 12}^{(3)}=\frac{4\widetilde{\beta }}{3\kappa }\left( 1-\frac{\widetilde{\beta }}{\kappa }\right) ,\quad \varphi _{12\mid 12}^{(4)}=\frac{2\widetilde{\beta }^2}{3\kappa }, \end{aligned}$$ (A8b) $$\begin{aligned} \overline{\varphi }_{12\mid 12}^{(1)}=&\frac{1}{15}\left[ 5\widetilde{\alpha }+10\widetilde{\beta }+\frac{\widetilde{\beta }}{\kappa }\left( 20-3\widetilde{\alpha }-7\widetilde{\beta }\frac{1+\kappa }{\kappa }\right) \right] , \end{aligned}$$ $$\begin{aligned} \overline{\varphi }_{12\mid 12}^{(2)}=&\frac{\widetilde{\beta }}{15\kappa }\left( \widetilde{\alpha }-\widetilde{\beta }\frac{1+\kappa }{\kappa }\right) ,\quad \overline{\varphi 
}_{12\mid 12}^{(3)}=\frac{\widetilde{\beta }}{15\kappa }\left( 10+5\widetilde{\alpha }+15\widetilde{\beta }-\frac{7\widetilde{\beta }}{\kappa }\right) , \end{aligned}$$ $$\begin{aligned} \overline{\varphi }_{12\mid 12}^{(4)}=&-\frac{2\widetilde{\beta }^2}{15\kappa ^2},\quad \overline{\varphi }_{12\mid 12}^{(5)}=\frac{\widetilde{\beta }}{15\kappa }\left( 10-5\widetilde{\alpha }-15\widetilde{\beta }-\frac{7\widetilde{\beta }}{\kappa }\right) \end{aligned}$$ (A9c) $$\begin{aligned} \chi _{40\mid 40}^{(1)}=&\frac{2}{15}\Big [10\widetilde{\alpha }-11\widetilde{\alpha }^2+6\widetilde{\alpha }^3-3\widetilde{\alpha }^4+20\widetilde{\beta }-26\widetilde{\beta }^2+16\widetilde{\beta }^3-8\widetilde{\beta }^4\nonumber \\&-4\widetilde{\alpha }{\beta }(2-{\alpha }-{\beta }+{\alpha }{\beta })\Big ], \end{aligned}$$ (A10a) $$\begin{aligned} \chi _{40\mid 40}^{(2)}=&-\frac{2}{15}\Big [7\widetilde{\alpha }^2-6\widetilde{\alpha }^3+3\widetilde{\alpha }^4+12\widetilde{\beta }^2-16\widetilde{\beta }^3+8\widetilde{\beta }^4-4\widetilde{\alpha }{\beta }(1+{\alpha }+{\beta }-{\alpha }{\beta })\Big ], \end{aligned}$$ (A10b) $$\begin{aligned} \chi _{40\mid 40}^{(3)}=&-\frac{4}{15}\Big [2\widetilde{\alpha }^2-6\widetilde{\alpha }^3+3\widetilde{\alpha }^4+7\widetilde{\beta }^2-16\widetilde{\beta }^3+8\widetilde{\beta }^4+2{\alpha }{\beta }(3-2{\alpha }-2{\beta }+2{\alpha }{\beta })], \end{aligned}$$ (A10c) $$\begin{aligned} \chi _{40\mid 04}=&-\frac{\widetilde{\beta }^4}{15},\quad \chi _{40\mid 22}^{(1)}=-\frac{\widetilde{\beta }^2}{15}\left[ 5-2\widetilde{\alpha }\left( 1-\widetilde{\alpha }\right) -8\widetilde{\beta }\left( 1-\widetilde{\beta }\right) \right] , \end{aligned}$$ (A10d) $$\begin{aligned} \chi _{40\mid 22}^{(2)}=&\frac{2\widetilde{\beta }^2}{15}\left[ \widetilde{\alpha }\left( 1-\widetilde{\alpha }\right) +4\widetilde{\beta }\left( 1-\widetilde{\beta }\right) \right] , \end{aligned}$$ (A10e) $$\begin{aligned} \chi _{04\mid 40}=&-\frac{256\widetilde{\beta }^4}{15\kappa ^4},\quad \chi _{04\mid 22}^{(1)}=-\frac{16\widetilde{\beta }^2}{15\kappa ^2}\left( 5-\frac{8\widetilde{\beta }}{\kappa }+\frac{8\widetilde{\beta }^2}{\kappa ^2}\right) , \end{aligned}$$ $$\begin{aligned} \chi _{04\mid 22}^{(2)}=&-\frac{128\widetilde{\beta }^3}{15\kappa ^3}\left( 1-\frac{\widetilde{\beta }}{\kappa }\right) ,\quad \chi _{04\mid 04}^{(1)}=\frac{4\widetilde{\beta }}{15\kappa }\left( 10-\frac{13\widetilde{\beta }}{\kappa }+\frac{8\widetilde{\beta }^2}{\kappa ^2}-\frac{4\widetilde{\beta }^3}{\kappa ^3}\right) , \end{aligned}$$ $$\begin{aligned} \chi _{04\mid 04}^{(2)}=&-\frac{8\widetilde{\beta }^2}{15\kappa ^2}\left( 3-\frac{4\widetilde{\beta }}{\kappa }+\frac{2\widetilde{\beta }^2}{\kappa ^2}\right) , \quad \chi _{04\mid 04}^{(3)}=-\frac{4\widetilde{\beta }^2}{15\kappa ^2}\left( 7-\frac{16\widetilde{\beta }}{\kappa }+\frac{8\widetilde{\beta }^2}{\kappa ^2}\right) , \end{aligned}$$ $$\begin{aligned} \chi _{04\mid 04}^{(4)}=&\frac{8\widetilde{\beta }}{15\kappa }\left( 1-\frac{{\beta }}{\kappa }\right) \left( 5-\frac{8\widetilde{\beta }}{\kappa }+\frac{8\widetilde{\beta }^2}{\kappa ^2}\right) , \end{aligned}$$ $$\begin{aligned} \chi _{22\mid 40}^{(1)} =&-\frac{8\widetilde{\beta }^2}{15\kappa ^2}\left[ 5-2\widetilde{\alpha }\left( 1-\widetilde{\alpha }\right) -8\widetilde{\beta }\left( 1-\widetilde{\beta }\right) \right] , \end{aligned}$$ $$\begin{aligned} \chi _{22\mid 40}^{(2)}=&\frac{32\widetilde{\beta }^2}{15\kappa ^2}\left[ \widetilde{\alpha }\left( 1-\widetilde{\alpha }\right) +4\widetilde{\beta }\left( 1-\widetilde{\beta 
}\right) \right] , \end{aligned}$$ $$\begin{aligned} \chi _{22\mid 22}^{(1)}=&\frac{1}{15}\left[ 5\widetilde{\alpha }(2-\widetilde{\alpha })+20\widetilde{\beta }\frac{1+\kappa }{\kappa }-2\widetilde{\beta }^2\frac{5+22\kappa +5\kappa ^2}{\kappa ^2}-\frac{8\widetilde{\alpha }(2-\widetilde{\alpha })\widetilde{\beta }}{\kappa } \right. \nonumber \\&\left. +\frac{8\widetilde{\alpha }(1-\widetilde{\alpha })\widetilde{\beta }^2}{\kappa ^2}+32\widetilde{\beta }^3\frac{1+\kappa }{\kappa ^2}-\frac{64\widetilde{\beta }^4}{\kappa ^2}\right] , \end{aligned}$$ $$\begin{aligned} \chi _{22\mid 22}^{(2)}=&-\frac{1}{15}\left[ 5\widetilde{\alpha }^2+10\widetilde{\beta }^2\frac{1+\kappa ^2}{\kappa ^2}-\frac{8\widetilde{\alpha }(1-\widetilde{\alpha })\widetilde{\beta }^2}{\kappa ^2}-\frac{8\widetilde{\alpha }^2\widetilde{\beta }}{\kappa } -32\widetilde{\beta }^3\frac{1+\kappa }{\kappa ^2}+\frac{64\widetilde{\beta }^4}{\kappa ^2}\right] , \end{aligned}$$ $$\begin{aligned} \chi _{22\mid 22}^{(3)}=&\frac{4\widetilde{\beta }}{15\kappa }\left[ \widetilde{\alpha }(2-\widetilde{\alpha })+3\widetilde{\beta }-4\widetilde{\beta }^2\frac{1+\kappa }{\kappa }-\frac{\widetilde{\alpha }(1-\widetilde{\alpha })\widetilde{\beta }}{\kappa }+\frac{8\widetilde{\beta }^3}{\kappa }\right] , \end{aligned}$$ $$\begin{aligned} \chi _{22\mid 22}^{(4)}=&-\frac{4\widetilde{\beta }}{15\kappa }\left[ \widetilde{\alpha }^2+4\widetilde{\beta }^2\frac{1+\kappa }{\kappa }+\frac{\widetilde{\alpha }(1-\widetilde{\alpha })\widetilde{\beta }}{\kappa }-\frac{8\widetilde{\beta }^3}{\kappa }\right] , \end{aligned}$$ (A12f) $$\begin{aligned} \chi _{22\mid 22}^{(5)}=&\frac{4\widetilde{\beta }}{15\kappa }\left[ 2\widetilde{\alpha }(1-\widetilde{\alpha })+3\widetilde{\beta }-8\widetilde{\beta }^2\frac{1+\kappa }{\kappa }-\frac{2\widetilde{\alpha }(1-\widetilde{\alpha })\widetilde{\beta }}{\kappa }+\frac{16\widetilde{\beta }^3}{\kappa }\right] , \end{aligned}$$ (A12g) $$\begin{aligned} \chi _{22\mid 22}^{(6)}=&\frac{4\widetilde{\beta }}{15\kappa }\left[ 5-4\widetilde{\alpha }(1-\widetilde{\alpha })-\widetilde{\beta }\frac{5+11\kappa }{\kappa }+16\widetilde{\beta }^2\frac{1+\kappa }{\kappa } +\frac{4\widetilde{\alpha }(1-\widetilde{\alpha })\widetilde{\beta }}{\kappa } -\frac{32\widetilde{\beta }^3}{\kappa }\right] , \end{aligned}$$ (A12h) $$\begin{aligned} \chi _{22\mid 22}^{(7)}=&-\frac{4\widetilde{\beta }}{15\kappa }\left[ \widetilde{\alpha }(1-\widetilde{\alpha })+4\widetilde{\beta }-4\widetilde{\beta }^2\frac{1+\kappa }{\kappa }-\frac{\widetilde{\alpha }(1-\widetilde{\alpha })\widetilde{\beta }}{\kappa }+\frac{8\widetilde{\beta }^3}{\kappa }\right] , \end{aligned}$$ (A12i) $$\begin{aligned} \chi _{22\mid 22}^{(8)}=&-\frac{4\widetilde{\beta }}{15\kappa }\left[ \widetilde{\alpha }(1-\widetilde{\alpha })-\widetilde{\beta }-4\widetilde{\beta }^2\frac{1+\kappa }{\kappa }-\frac{\widetilde{\alpha }(1-\widetilde{\alpha })\widetilde{\beta }}{\kappa }+\frac{8\widetilde{\beta }^3}{\kappa }\right] , \end{aligned}$$ (A12j) $$\begin{aligned} \chi _{22\mid 22}^{(9)}=&\frac{4\widetilde{\beta }}{15\kappa }\left[ 4\widetilde{\alpha }(1-\widetilde{\alpha })+11\widetilde{\beta }-16\widetilde{\beta }^2\frac{1+\kappa }{\kappa }-\frac{4\widetilde{\alpha }(1-\widetilde{\alpha })\widetilde{\beta }}{\kappa }+\frac{32\widetilde{\beta }^3}{\kappa }\right] , \end{aligned}$$ (A12k) $$\begin{aligned} \chi _{22\mid 04}^{(1)}=&-\frac{\widetilde{\beta }^2}{30}\left( 5-\frac{8\widetilde{\beta }}{\kappa }+\frac{8\widetilde{\beta }^2}{\kappa ^2}\right) ,\quad \chi _{22\mid 
04}^{(2)}=\frac{8\widetilde{\beta }^3}{15\kappa }\left( 1-\frac{\widetilde{\beta }}{\kappa }\right) , \end{aligned}$$ (A12l) $$\begin{aligned} \chi _{22\mid 04}^{(3)}=&-\frac{\widetilde{\beta }^2}{15}\left( 5-\frac{16\widetilde{\beta }}{\kappa }+\frac{16\widetilde{\beta }^2}{\kappa ^2}\right) , \end{aligned}$$ (A12m) $$\begin{aligned} \overline{\chi }_{22\mid 40}=&-\frac{4\widetilde{\beta }^2}{3\kappa ^2}, \end{aligned}$$ $$\begin{aligned} \overline{\chi }_{22\mid 22}^{(1)}=&\frac{1}{15}\left[ 2\widetilde{\alpha }(5-\widetilde{\alpha })+2\widetilde{\beta }(10-3\widetilde{\alpha })\frac{1+\kappa }{\kappa }-7\widetilde{\beta }^2\frac{(1+\kappa )^2}{\kappa ^2}\right] , \end{aligned}$$ $$\begin{aligned} \overline{\chi }_{22\mid 22}^{(2)}=&-\frac{1}{15}\left( \widetilde{\alpha }-\widetilde{\beta }\frac{1+\kappa }{\kappa }\right) ^2,\quad \overline{\chi }_{22\mid 22}^{(3)}=-\frac{1}{15}\left[ \left( \widetilde{\alpha }-\widetilde{\beta }\right) ^2+\frac{\widetilde{\beta }^2}{\kappa ^2}\right] , \end{aligned}$$ $$\begin{aligned} \overline{\chi }_{22\mid 22}^{(4)}=&-\frac{1}{15}\left( 2\widetilde{\alpha }^2+6\widetilde{\alpha }\widetilde{\beta }+7\widetilde{\beta }^2\frac{1+\kappa ^2}{\kappa ^2}\right) , \end{aligned}$$ $$\begin{aligned} \overline{\chi }_{22\mid 22}^{(5)}=&\frac{2\widetilde{\beta }}{15\kappa }\left( 10-3\widetilde{\alpha }-7\widetilde{\beta }\frac{1+\kappa }{\kappa }\right) , \quad \overline{\chi }_{22\mid 22}^{(6)}=\frac{2\widetilde{\beta }}{15\kappa }\left( \widetilde{\alpha }-\widetilde{\beta }\frac{1+\kappa }{\kappa }\right) , \end{aligned}$$ $$\begin{aligned} \overline{\chi }_{22\mid 22}^{(7)}=&-\frac{2\widetilde{\beta }}{15\kappa }\left( \widetilde{\alpha }+4\widetilde{\beta }\right) , \quad \overline{\chi }_{22\mid 22}^{(8)}=\frac{2\widetilde{\beta }}{15\kappa }\left( 4\widetilde{\alpha }+11\widetilde{\beta }\right) , \end{aligned}$$ $$\begin{aligned} \overline{\chi }_{22\mid 22}^{(9)}=&-\frac{2\widetilde{\beta }}{15\kappa }\left( \widetilde{\alpha }-\widetilde{\beta }\right) , \quad \overline{\chi }_{22\mid 04}=-\frac{\widetilde{\beta }^2}{12}. \end{aligned}$$ Table 3 Coefficients associated with the collisional moments of first, second, and third degree in Table 1 in the special cases of (i) inelastic and perfectly smooth particles (\({\alpha }<1\), \({\beta }=-1\)) and (ii) elastic and perfectly rough particles (\({\alpha }={\beta }=1\)). Table 4 Coefficients associated with the collisional moments \(\mathcal {J}_{\text {M}}[V^4]\) and \(\mathcal {J}_{\text {M}}[\omega ^4]\) in Table 1 in the special cases of (i) inelastic and perfectly smooth particles (\({\alpha }<1\), \({\beta }=-1\)) and (ii) elastic and perfectly rough particles (\({\alpha }={\beta }=1\)). Table 5 Coefficients associated with the collisional moments \(\mathcal {J}_{\text {M}}[V^2\omega ^2]\) and \(\mathcal {J}_{\text {M}}[(\mathbf {V}\cdot \varvec{\omega })^2]\) in Table 1 in the special cases of (i) inelastic and perfectly smooth particles (\({\alpha }<1\), \({\beta }=-1\)) and (ii) elastic and perfectly rough particles (\({\alpha }={\beta }=1\)). Appendix B: Consistency Tests Tables 3, 4, 5 display the expressions of the 60 coefficients in two interesting situations: (i) inelastic and perfectly smooth particles (i.e., the IMM, \({\alpha }<1\), \({\beta }=-1\)) and (ii) elastic and perfectly rough particles (i.e., the Pidduck model, \({\alpha }={\beta }=1\)). Let us use both situations (i) and (ii) as tests of the results. 2.1. 
Inelastic Maxwell Model (IMM) In case (i), 45 out of the 60 coefficients vanish. First, the angular velocities are unaffected by collisions and thus \(\mathcal {J}_{\text {M}}[\Psi _{0k_2}]=0\) because \(\Psi _{0k_2}(\varvec{\xi })\) is a function of \(\varvec{\omega }\) only. This implies that the 12 coefficients of the form \(Y_{0k_2\mid \ell _1\ell _2}\) identically vanish. Also, since \(\Psi _{k_10}(\varvec{\xi })\) is a function of \(\mathbf {V}\) only, \(\mathcal {J}_{\text {M}}[\Psi _{k_10}]\) cannot be coupled to moments involving the angular velocity, so that \(Y_{k_10\mid \ell _1\ell _2}=0\) if \(\ell _2\ne 0\) (6 coefficients). Next, the angular and translational velocities are uncorrelated and, therefore, \(\mathcal {J}_{\text {M}}[\Psi _{k_1k_2}]=\langle \Psi _{k_2}^{(2)}(\varvec{\omega })\rangle \mathcal {J}_{\text {M}}[\Psi _{k_1}^{(1)}]\) and \(\langle \Psi _{k_1k_2}(\varvec{\xi })\rangle =\langle \Psi _{k_1}^{(1)}(\mathbf {V})\rangle \langle \Psi _{k_2}^{(2)}(\varvec{\omega })\rangle \). This justifies the vanishing of the remaining 27 coefficients, as well as the relations $$\begin{aligned} \chi _{22\mid 22}^{(1)}+\chi _{22\mid 22}^{(2)}=\chi _{20\mid 20}, \end{aligned}$$ (B14a) $$\begin{aligned} \overline{\chi }_{22\mid 22}^{(2)}+\overline{\chi }_{22\mid 22}^{(3)}=\frac{1}{3}\left( \chi _{20\mid 20}-\psi _{20\mid 20}\right) , \quad \overline{\chi }_{22\mid 22}^{(1)}+\overline{\chi }_{22\mid 22}^{(4)}=\psi _{20\mid 20}. \end{aligned}$$ (B14b) Equations (B14a and B14b) imply \(\mathcal {J}_{\text {M}}[V^2\omega ^2]=\langle \omega ^2\rangle \mathcal {J}_{\text {M}}[V^2]\) and \(\mathcal {J}_{\text {M}}[(\mathbf {V}\cdot \varvec{\omega })^2]=\langle \varvec{\omega }\varvec{\omega }\rangle :\mathcal {J}_{\text {M}}[\mathbf {V}\mathbf {V}]\), respectively. Finally, the coefficients \(\chi _{20\mid 20}\), \(\psi _{20\mid 20}\), \(\varphi _{30\mid 30}\), \(\chi _{40\mid 40}^{(1)}\), \(\chi _{40\mid 40}^{(2)}\), and \(\chi _{40\mid 40}^{(3)}\) agree with previous results for the IMM [2, 39, 40, 42,43,44]. 2.2. Pidduck Model In case (ii) the total energy is conserved by collisions, i.e., \(\mathcal {J}_{\text {M}}[mV^2+I\omega ^2]=0\). This is guaranteed by the relations $$\begin{aligned} \chi _{20\mid 20}+\frac{\kappa }{4}\chi _{02\mid 20}=0,\quad \chi _{20\mid 02}+\frac{\kappa }{4}\chi _{02\mid 02}=0. \end{aligned}$$ (B15) Other tests in case (ii) correspond to the fact that \(\mathcal {J}_{\text {M}}[\Psi ]=0\) for any \(\Psi (\varvec{\xi })\) if the system is at equilibrium, in which case all the anisotropic moments vanish and \(\langle \omega ^2\rangle =\frac{m}{I}\langle V^2\rangle =\frac{4}{\kappa \sigma ^2}\langle V^2\rangle \), \(\langle V^2\omega ^2\rangle =3\langle \left( \mathbf {V}\cdot \varvec{\omega }\right) ^2\rangle =\langle V^2\rangle \langle \omega ^2\rangle \), \(\langle V^4\rangle =\frac{5}{3}\langle V^2\rangle ^2\), and \(\langle \omega ^4\rangle =\frac{5}{3}\langle \omega ^2\rangle ^2\). 
Therefore, according to Table 1, one should have $$\begin{aligned} \chi _{20\mid 20}+\frac{4}{\kappa }\chi _{20\mid 02}=0,\quad \chi _{02\mid 20}+\frac{4}{\kappa }\chi _{02\mid 02}=0, \end{aligned}$$ $$\begin{aligned} {5}\chi _{40\mid 40}^{(1)}+3\chi _{40\mid 40}^{(2)}+\chi _{40\mid 40}^{(3)}+\frac{40}{\kappa }\chi _{40\mid 22}^{(1)}+\frac{160}{\kappa ^2}\chi _{40\mid 04}=0, \end{aligned}$$ $$\begin{aligned} 5\chi _{04\mid 40}+\frac{20}{\kappa }\chi _{04\mid 22}^{(1)}+\frac{8}{\kappa ^2}\left[ 5\chi _{04\mid 04}^{(1)}+3\chi _{04\mid 04}^{(2)}+\chi _{04\mid 04}^{(3)}\right] =0, \end{aligned}$$ (B16c) $$\begin{aligned} 8\chi _{22\mid 40}^{(1)}+&\chi _{22\mid 40}^{(2)}+\frac{4}{\kappa }\left[ 3\chi _{22\mid 22}^{(1)}+3\chi _{22\mid 22}^{(2)}+\chi _{22\mid 22}^{(3)}+\chi _{22\mid 22}^{(4)}\right] \nonumber \\ +&\frac{16}{\kappa ^2}\left[ 8\chi _{22\mid 04}^{(1)}+\chi _{22\mid 04}^{(2)}\right] =0, \end{aligned}$$ (B16d) $$\begin{aligned} \overline{\chi }_{22\mid 40}+\frac{2}{\kappa }\left[ \overline{\chi }_{22\mid 22}^{(1)}+3\overline{\chi }_{22\mid 22}^{(2)}+3\overline{\chi }_{22\mid 22}^{(3)}+\overline{\chi }_{22\mid 22}^{(4)}\right] +\frac{16}{\kappa ^2}\overline{\chi }_{22\mid 04}=0. \end{aligned}$$ (B16e) From the expressions in Tables 3, 4, 5 one can check that Eqs. (B16) are indeed satisfied. Kremer, G.M., Santos, A. Granular Gas of Inelastic and Rough Maxwell Particles. J Stat Phys 189, 23 (2022). https://doi.org/10.1007/s10955-022-02984-6 Granular gas Inelastic collisions Rough particles Maxwell model
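The consistency relations above lend themselves to a quick mechanical check. The following minimal SymPy sketch verifies Eq. (B15) and the first relation of Eq. (B16) in the elastic, perfectly rough limit. It assumes the reduced parameters $\widetilde{\alpha}=(1+\alpha)/2$ and $\widetilde{\beta}=\kappa(1+\beta)/[2(1+\kappa)]$ defined in the main text of the paper (not restated in this excerpt), so that $\alpha=\beta=1$ gives $\widetilde{\alpha}=1$ and $\widetilde{\beta}=\kappa/(1+\kappa)$.

```python
# Minimal SymPy sketch: check Eq. (B15) and the first relation of Eq. (B16)
# in the Pidduck limit (alpha = beta = 1).  The definitions
# at = (1+alpha)/2 and bt = kappa*(1+beta)/(2*(1+kappa)) are assumed here;
# they come from the main text of the paper, not from this excerpt.
import sympy as sp

kappa = sp.symbols('kappa', positive=True)
at = sp.Integer(1)           # alpha-tilde at alpha = 1
bt = kappa / (1 + kappa)     # beta-tilde  at beta  = 1

# Coefficients from Appendix A (second- and zeroth-degree blocks)
chi_20_20 = sp.Rational(2, 3) * (at*(1 - at) + 2*bt*(1 - bt))
chi_20_02 = -bt**2 / 3
chi_02_02 = sp.Rational(4, 3) * (bt/kappa) * (1 - bt/kappa)
chi_02_20 = -sp.Rational(16, 3) * bt**2 / kappa**2

# Eq. (B15): conservation of total (translational + rotational) energy
assert sp.simplify(chi_20_20 + kappa/4 * chi_02_20) == 0
assert sp.simplify(chi_20_02 + kappa/4 * chi_02_02) == 0

# First relation of Eq. (B16): the collisional moment of V^2 vanishes at equilibrium
assert sp.simplify(chi_20_20 + 4/kappa * chi_20_02) == 0
print("Eq. (B15) and the first relation of Eq. (B16) hold for alpha = beta = 1.")
```

The remaining conditions in Eq. (B16) can be checked in the same way once the fourth-degree coefficients of Appendix A are transcribed.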
Article (Ukrainian)
Topological and metric properties of sets of real numbers with conditions on their expansions in Ostrogradskii series
Baranovskyi O. M., Pratsiovytyi M. V., Torbin H. M.
Ukr. Mat. Zh. - 2007. - 59, № 9. - pp. 1155–1168
We study topological and metric properties of the set $$C\left[\overline{O}^1, \{V_n\}\right] = \left\{x:\; x= \sum_n \frac{(-1)^{n-1}}{g_1(g_1 + g_2)\cdots(g_1 + g_2 + \cdots + g_n)},\quad g_k \in V_k \subset \mathbb{N}\right\}$$ with certain conditions on the sequence of sets $\{V_n\}$. In particular, we establish conditions under which the Lebesgue measure of this set is (a) zero and (b) positive. We compare the results obtained with the corresponding results for continued fractions and discuss their possible applications to probability theory.

Rate of convergence to ergodic distribution for queue length in systems of the $M^{\theta}/G/1/N$
Bratiychuk A. M.
For finite-capacity queueing systems of the type $M^{\theta}/G/1$, convenient formulas for the ergodic distribution of the queue length are obtained. An estimate of the rate of convergence of the queue-length distribution in the transient regime to the ergodic distribution is obtained, and computational algorithms for finding the convergence rate are presented.

Article (English)
General Kloosterman sums over ring of Gaussian integers
Varbanets S. P.
Ukr. Mat. Zh. - 2007. - 59, № 9. - pp. 1179-1200
The general Kloosterman sum $K(m, n; k; q)$ over $\mathbb{Z}$ was studied by S. Kanemitsu, Y. Tanigawa, Yi. Yuan, and Zhang Wenpeng in their research on a problem of D. H. Lehmer. In this paper, we obtain similar estimates for $K(\alpha, \beta; k; \gamma)$ over $\mathbb{Z}[i]$. We also consider the sum $\widetilde{K}(\alpha, \beta; h, q; k)$, which has no analogue in the ring $\mathbb{Z}$ but can be used to investigate the second moment of the Hecke zeta-function of the field $\mathbb{Q}(i)$.

Approximation of $(\psi, \beta)$-differentiable functions defined on the real axis by Weierstrass operators
Kalchuk I. V.
Asymptotic equalities are obtained for the upper bounds of approximations by the Weierstrass operators on the functional classes $\widehat{C}^{\psi}_{\beta, \infty}$ and $\widehat{L}^{\psi}_{\beta, 1}$ in the metrics of the spaces $\widehat{C}$ and $\widehat{L}_1$, respectively.

Article (Russian)
On moduli of smoothness and Fourier multipliers in $L_p, 0 < p < 1$
Kolomoitsev Yu. S.
We obtain a theorem on the relationship between a modulus of smoothness and the best approximation in $L_p$, $0 < p < 1$, as well as theorems on the extension of functions with preservation of the modulus of smoothness in $L_p$, $0 < p < 1$. In addition, we present a complete description of multipliers of periodic functions in the spaces $L_p$, $0 < p < 1$.

Invariants of knots, surfaces in $\mathbb{R}^3$, and foliations
Plakhta L. P.
We give a survey of some known results related to combinatorial and geometric properties of finite-order invariants of knots in three-dimensional space. We study the relationship between Vassiliev invariants and some classical numerical invariants of knots and point out the role of surfaces in the investigation of these invariants. We also consider combinatorial and geometric properties of essential tori in standard position in closed braid complements, using the braid foliation technique developed by Birman, Menasco, and other authors. We also study reductions of link diagrams in the context of finding the braid index of links.

Approximation of holomorphic functions by Taylor-Abel-Poisson means
Savchuk V. V.
We investigate approximations of functions $f$ holomorphic in the unit disk by the means $A_{\rho, r}(f)$ as $\rho \rightarrow 1^{-}$. In terms of the error of approximation by these means, a constructive characterization of the classes of holomorphic functions $H_p^r \text{\;Lip\,}\alpha$ is given. The problem of the saturation of $A_{\rho, r}(f)$ in the Hardy space $H_p$ is solved.

On Schur classes for modules over group rings
Chupordya V. A., Semko N. N.
We consider the problem of the relationship between the factor-module $A / C_A(G)$ and the submodule $A(\omega RG)$, where $G$ is a group, $R$ is a ring, and $A$ is an $RG$-module. One may regard $C_A (G)$ as an analog of the center of the group and the submodule $A(\omega RG)$ as an analog of the derived subgroup of the group.

Expansion of weighted pseudoinverse matrices with singular weights into matrix power products and iteration methods
Deineka V. S., Galba E. F., Sergienko I. V.
We obtain expansions of weighted pseudoinverse matrices with singular weights into matrix power products with negative exponents and arbitrary positive parameters. We show that the rate of convergence of these expansions depends on a parameter. On the basis of the proposed expansions, we construct and investigate iteration methods with a quadratic rate of convergence for the calculation of weighted pseudoinverse matrices and weighted normal pseudosolutions. The iteration methods for the calculation of weighted normal pseudosolutions are adapted to the solution of least-squares problems with constraints.

Stability of a dynamical system with semi-Markov switchings under conditions of diffusion approximation
Chabanyuk Ya. M.
We obtain sufficient conditions for the stability of a dynamical system in a semi-Markov medium under the conditions of diffusion approximation, using asymptotic properties of the compensation operator for a semi-Markov process and properties of the Lyapunov function for an averaged system.
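As a small numerical illustration of the alternating expansion appearing in the first abstract above, the sketch below computes partial sums of the series for the simplest admissible choice $g_k = 1$ for all $k$ (i.e., $V_k = \{1\}$); the partial denominators then reduce to $n!$ and the series converges to $1 - 1/e$. The helper name ostrogradskii_partial_sums is ours, introduced only for this illustration.

```python
# Partial sums of  x = sum_n (-1)^(n-1) / (g_1 (g_1+g_2) ... (g_1+...+g_n)).
# Purely illustrative; the choice g_k = 1 is the simplest member of the family
# of sets studied in the abstract (V_k = {1} for all k).
from math import e

def ostrogradskii_partial_sums(g, n_terms):
    """Return the first n_terms partial sums for a given sequence g[0], g[1], ..."""
    sums, total, denom, prefix = [], 0.0, 1.0, 0
    for n in range(n_terms):
        prefix += g[n]       # g_1 + ... + g_n
        denom *= prefix      # g_1 (g_1+g_2) ... (g_1+...+g_n)
        total += (-1) ** n / denom
        sums.append(total)
    return sums

partials = ostrogradskii_partial_sums([1] * 10, 10)
print(partials[-1], 1 - 1 / e)   # both are ~0.6321205...
```

With ten terms the partial sum already agrees with $1 - 1/e \approx 0.6321$ to seven decimal places, reflecting the factorial growth of the denominators.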
Electronic structure beyond the generalized gradient approximation for ${\mathrm{Ni}}_{2}\mathrm{MnGa}$
Baigutlin, D. R.
Faculty of Physics, Chelyabinsk State University, 454001 Chelyabinsk, Russia - Department of Physics, School of Engineering Science, LUT University, FI-53850 Lappeenranta, Finland
Sokolovskiy, V. V.
Faculty of Physics, Chelyabinsk State University, 454001 Chelyabinsk, Russia
Miroshkina, O. N.
Faculty of Physics, Chelyabinsk State University, 454001 Chelyabinsk, Russia - Department of Physics, School of Engineering Science, LUT University, FI-53850 Lappeenranta, Finland
Zagrebin, M. A.
Faculty of Physics, Chelyabinsk State University, 454001 Chelyabinsk, Russia - National Research South Ural State University, 454080 Chelyabinsk, Russia - National University of Science and Technology "MISiS," 119049 Moscow, Russia
Nokelainen, J.
Department of Physics, School of Engineering Science, LUT University, FI-53850 Lappeenranta, Finland
Pulkkinen, Aki
Department of Physics, School of Engineering Science, LUT University, FI-53850 Lappeenranta, Finland - Département de Physique and Fribourg Center for Nanomaterials, Université de Fribourg, CH-1700 Fribourg, Switzerland
Barbiellini, B.
Department of Physics, School of Engineering Science, LUT University, FI-53850 Lappeenranta, Finland - Department of Physics, Northeastern University, Boston, Massachusetts 02115, USA
Pussi, K.
Department of Physics, School of Engineering Science, LUT University, FI-53850 Lappeenranta, Finland
Lähderanta, E.
Department of Physics, School of Engineering Science, LUT University, FI-53850 Lappeenranta, Finland
Buchelnikov, V. D.
Faculty of Physics, Chelyabinsk State University, 454001 Chelyabinsk, Russia - National University of Science and Technology "MISiS," 119049 Moscow, Russia
Zayak, A. T.
Department of Physics & Astronomy, Bowling Green State University, Bowling Green, Ohio 43403, USA
Physical Review B. - 2020, vol. 102, no. 4, p. 045127
English
The stability of the nonmodulated martensitic phase, the austenitic Fermi surface, and the phonon dispersion relations for ferromagnetic Ni2MnGa are studied using density functional theory. Exchange-correlation effects are considered with various degrees of precision, starting from the simplest local spin density approximation (LSDA), then adding corrections within the generalized gradient approximation (GGA), and finally, including the meta-GGA corrections within the strongly constrained and appropriately normed (SCAN) functional. We discuss a simple procedure to reduce a possible overestimation of magnetization and underestimation of nesting vector in SCAN by parametrically decreasing self-interaction corrections.
Département de Physique
DOI 10.1103/PhysRevB.102.045127
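As a purely illustrative starting point for this kind of functional comparison, the sketch below builds the conventional L2₁ (full Heusler) cell of Ni2MnGa with ASE and assigns a ferromagnetic starting configuration. The lattice constant of about 5.82 Å, the starting moments, and the Wyckoff assignment are generic literature-style assumptions made here for the sketch, not settings taken from the paper, and the choice of DFT calculator is left open.

```python
# Illustrative setup of the L2_1 Heusler cell of Ni2MnGa with ASE.
# The lattice constant (~5.82 A) and starting moments are rough placeholder
# values; the paper's own computational settings (k-mesh, cutoffs, SCAN
# implementation, self-interaction tuning) are not reproduced here.
from ase.spacegroup import crystal

a = 5.82  # assumed cubic lattice constant in Angstrom
ni2mnga = crystal(
    symbols=("Mn", "Ga", "Ni"),
    basis=[(0.0, 0.0, 0.0),      # Mn on 4a
           (0.5, 0.5, 0.5),      # Ga on 4b
           (0.25, 0.25, 0.25)],  # Ni on 8c (generates both Ni sublattices)
    spacegroup=225,              # Fm-3m
    cellpar=[a, a, a, 90, 90, 90],
)

# Ferromagnetic starting guess: moments mostly on Mn, small on Ni.
moments = [3.0 if s == "Mn" else (0.3 if s == "Ni" else 0.0)
           for s in ni2mnga.get_chemical_symbols()]
ni2mnga.set_initial_magnetic_moments(moments)

# One would then attach a spin-polarized DFT calculator and repeat the
# calculation with LSDA, a GGA (e.g. PBE), and SCAN to compare structures,
# magnetization, and Fermi surfaces, which is the comparison described above.
print(ni2mnga.get_chemical_formula(), len(ni2mnga), "atoms in the conventional cell")
```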
The Ross Mathematics Program Previous Summers This page is a work in progress. We invite alumni to record some of their memories of the Ross Program, and to fill in missing information in the stories already posted. Ross Quotes or bon mots Recorded in 1970: Think deeply about simple things. This is food for thought. By the end of the summer you will be haggard but happy. Even the teacher needs encouragement. It is not unreasonable, in proving something, to make use of the hypothesis. The greatest common divisor is the greatest, common, divisor. An approximation to 5 is any number other than 5. I am the only one in this room that has the right to make numerical mistakes, and I take full advantage of that right. We are always limited by our experience. When you arrived here, your view of arithmetic was like that of an experimental physicist: One fact was as good as any other. The "negative of c," or "minus c," but never "negative c." Our greatest enemy is comfort. This is one of the most beautiful and important theorems of number theory. We are here because of a common weakness for mathematics, not, as they write in the newspaper, a common strength. Dr. Ross told many stories about his experiences in life, and many of those stories were often repeated and seemed designed to make some clear point. Often the point was that perseverance pays off: Don't Give Up. Some of those stories are incorporated in the long post about Ross History. Some stories were told only once or twice. One of them was the Michelson anecdote: In the late 1920s a lecture at the University of Chicago was given by Nobel Prize winning physicist A. Michelson, who was famous for his 1887 measurements (with E. Morley) of the speed of light. Since the title of that lecture was "The philosophy of light," the large lecture room was packed with many philosophers as well as scientists. Students (including Ross) had to sit on the floor in the aisles. The distinguished old lecturer began by apologizing for a secretary's misunderstanding: the correct title was "The velocity of light". Michelson then gave a technical talk about recent work refining his earlier measurements. Philosophy professors in the audience were unable to escape. Stories about Arnold Ross. Prologue. When was Dr. Ross's Prologue written? For several years, we thought it first appeared at the German version of the Ross Program in the 1970s, but in 2002 a much earlier copy was found. The document posted here is an edited version, composed by Daniel Shapiro and David Pollack around 2005. The original Ross Prologue was less focused and was apparently intended for beginning math teachers who are thinking about the best ways to teach young students. The Reduced Inventory game. For many years this game of axioms in set theory was available only as "Chapter II" in mimeographed copies of Dr. Ross's unpublished book Towards the Abstract. I conjecture that his book project grew out of the program for math teachers that Ross ran at Notre Dame in the 1950s. Many years later it was published as a separate article: Towards the Abstract, in a 1978 issue of Mathematical Spectrum. The game starts with a list of about forty true statements about sets, involving union, intersection, complement, empty set, and universal set. The basic rule of this game is: A statement may be removed from the list if it can be proved from the remaining statements on the list, using standard rules of logic. Successively cross off as many statements as you can. 
The remaining statements form our list of axioms, sufficient to prove all the original statements. One purpose of this game is to make clear how a particular list of axioms for a theory is a matter of taste rather than an essential feature of the mathematics. In some years in the 1960s, Ross students were asked to play this game and write down their lists and proofs. A prize (a number theory book) was given to the student with the shortest list of axioms. This is a wonderful exercise, but the minimal list of axioms depends on what exactly are accepted as "standard rules of logic." String tie? When did Dr. Ross begin wearing a string tie rather than a cloth tie? One story says that Ross was on a driving trip through the Southwest, saw some string ties, and realized how much easier they are to put on and take off. Stories about the Ross Program and its participants. [as recalled by D. Shapiro] Pet beetles. In 1966 the Nosker House dormitory was new and Ross students were the first group to live there. The OSU crews had not used much insecticide that summer and many black beetles crawled on the outside of the building. Those beetles were harmless, less than an inch long, and moved fairly slowly. Bob Tax, one of the best students that year, took a liking to those beetles. He captured a few, named each one Ambrose, and usually kept them in an empty coffee can. He would occasionally take one or two to class in a pill bottle, and let it out on his arm or face, nudging the person next to him to witness how cute it looked. In his room, if we were quiet, we could hear them scrabbling away on the bottom of that metal can. For dinner Bob would drop in one Frosted Flake and we would listen to the beetles chewing it. A few people, including a couple of counselors, were disgusted by the Ambrose situation, much to Bob's delight. Knibbling. I first learned about knibbling (the "k" is not silent) at the Program in 1966. Get a metal coat hanger, pull on the straight part to get a bend in the middle, and place the hanger on your index finger with the hook part at the bottom. After some adjustment of the end of the hook, you can balance a penny there. Swing the hanger back and forth with increasing amplitude, until you get the hanger (with penny) to go completely around several times. Then slow it down, get it back to swinging, and finally to stop, all without dropping the penny. Learning to do this took some practice, and pennies tended to fly off the hanger at inconvenient times. The Program record was reached when one student managed to knibble a stack of seven pennies without dropping any. The Program had to pay for the resulting damage to the ceilings in Nosker House. Microwaving a watch. Each dorm room is equipped with a combination refrigerator/microwave unit. One summer day in 1999, a first-year student (not named here to protect the innocent, but we'll call him Eric) wondered what would happen if he microwaved his plastic watch. Putting thought into action, Eric got a quick answer to his question: The digital display bubbled and glowed as it rotated, and then noxious colored smoke billowed out, spreading quite a stench. Someone called 911, a fire truck arrived, and the dormitory was evacuated until the smoke cleared. The next day's riddle: What's the difference between someone making popcorn, and Eric? Answer: One watches the microwave, while the other microwaves the watch. Since then Program leaders tell students: Be careful what you microwave! Origin of Ross Program T-shirt design. 
Theodore Allen attended the Ross Program at Chicago in 1978 and is currently a physics professor at Hobart & William Smith Colleges. Here is an edited version of a message he sent around 2005: "I have been doing calligraphy since the early 1970s. At the Ross Program in Chicago in 1978, Matthew Wiener convinced me to write up a proof of quadratic reciprocity in Gothic lettering. We went downtown and found a silkscreen maker who made us a screen of the proof. We made T-shirts for any students who brought us a shirt. Several years later, when I was a TA at Caltech (in 1986 or 1987) I saw that Glenn Tesler, one of my students, was wearing a version of that T-shirt! The lettering had gotten ragged on the edges but was still recognizable as my design. Glenn was very surprised to learn that I had done the artwork for the shirt." The Ross Program re-started in Columbus in 1979. Only a couple of people continued on from the Chicago program, and one of them had a QR shirt. The counselors arranged to photocopy the lettering from that shirt and got a local T-shirt store to create new Ross shirts. Within a year or two, the QR shirt became a symbol of the Ross Program. A couple of years later, some counselors decided to redo the lettering because the photocopy process was fuzzy. I think Daishi Harada wrote new calligraphy for the shirt in the same style as before. Perhaps Megumi Harada was also involved with writing that design. That version of the shirt was used for several years. Some time in the early 1990s, Debbie (one of the Ross secretaries) typed up the QR proof using a Gothic font and we used that version. Unfortunately, an error was introduced, with (-1)^{(p-1)/2}(-1)^{(q-1)/2} instead of the correct expression. Those "mint error" T-shirts are now rare and probably valuable. In 1996, for Dr. Ross's Ninetieth Birthday Conference, someone designed artwork for the back of the shirt, with "Ross Mathematics Program" in an arch at the top and "Think Deeply of Simple Things" across the bottom. In May 2007 Megumi Harada was kind enough to re-calligraph the T-shirt design, both front and back, in preparation for the Ross Program's Fiftieth Anniversary Celebration. Current T-shirts use that version. Z hats. For one of the reunions (probably in 2001) we realized that the local T-shirt printing company also offered baseball caps with custom designs. We decided that a capital Z in "blackboard bold" was a great symbol to use, since it stands for the set of integers. Since then the Z-hat has been available for alumni to make a definite fashion statement. Another aspect was also involved with this design. The "completion" of a ring R is a larger ring typically designated by the R with a caret on top, often called a hat. Then Z-hat (in LaTeX: $\widehat{\mathbb Z}$ ) is the completion of Z, equal to the inverse-limit of all factor rings Z/nZ. Around 2005 Karl Zipple always wore his new hat, explaining: "I just don't feel complete without my Z-hat." Entertainment by the counselors. In the late 1960s, the counselors had evening story times. They read aloud to the students, typically some interesting poetry or short story. This task was assigned to counselors on a rotating basis. One of the most memorable of those events was the time Mike Anderson read "The Hunting of the Snark" using a different voice for each character. Even in the early years there was a Talent Show in which Ross Program participants entertained the whole group with performances of various types. 
Groups of students sometimes perform imitations or parodies of the counselors and instructors, with varying success. In 1967 (or 66), Mark Bolotin was instrumental in writing a "comical musedy" in the style of My Fair Lady, with number theoretic topics in the songs. For instance, the "Rain in Spain" song became "The GCD is just the GCD". A parody of "Try to Remember" song from the Fantasticks included the line: "Try to remember the symbol Legendre, it does not work when q's composite." At a more recent Talent Show, in 2012, Ravi Fernando juggled different sorts of objects. The most memorable object was a Rubik's cube that he solved while juggling. He has posted a YouTube video of this feat. Stories about the Program's instructors and administrators. Father Ivo Thomas (1912 – 1976) was a highly cultured person, with a deep knowledge of the classics, religion, and mathematical logic. His obituary describes his interesting background and career. He was a professor of classics at Notre Dame, but he taught logic at the Ross Program every summer from 1959 to 1974. Students in Father Thomas's classes learned that "detachment" is "implication", and used Polish notation to study various aspects first order logic and modal logic, often with named formulas like "Scotus" and "Tarski." We found a good photo of Father Thomas from those days. (We hope to post it here soon.) Ivo Thomas changed quite a bit in the late 1960s. As the Ross Program group photos show, he wore a suit with a priest's backward collar in 1966, changing to a regular suit a couple of years later, and then to less formal clothes. In 1972 he was released from priestly duties and married a female colleague at Notre Dame. In 1967 Father Thomas was apparently asked by Dr. Ross to stop by the dorm and interact with the kids. He showed up one evening and proceeded to teach several of us various string tricks (similar to Cat's Cradle). We learned how to twist a loop of string into Jacob's Ladder, Seagull, Fishies, Pyramid, Whale, Vanishing Pear, Two Ptarmigan, etc. The practice of those string figures became e Ross Program tradition for several years. Harold Brown was a math faculty member at OSU and Assistant Director of the Ross Program during the late 1960s. We found a photo of Brown from those years. (To be posted here.) In the early 1970s he made some changes in his life, getting a toupee, a motorcycle, and a divorce. He quit his academic position and becoming a computer scientist associated with Stanford University. Hans Zassenhaus taught Experimental Number Theory at the Ross Program for a few years in the 1960s and early 1970s. He and Jill Yaqub also collaborated on teaching a course in projective geometry, but Zassenhaus sometimes fell asleep in the back of the room while Yaqub was lecturing. We didn't joke too loudly about that. Kurt Mahler taught courses in the geometry of numbers at the Program during the late 1960s. He was a professor at Ohio State for several years in the 1970s, and then moved to the Australian National University. He was short and stout, often wore a hat and a dark suit, and used a thick cane to help him walk. (He appears in the Ross group pictures in 1969, 1970, 1971, and 1973.) Mahler had a German accent and used old German script letters as his mathematical symbols when lecturing. He seldom erased anything, getting nearly to the end of the chalkboard just as the bell would ring to end class. Mahler was a camera buff and took lots of photos wherever he went. 
I remember during one quarter at OSU Mahler taught a course on "mathematical Chinese." Gloria Woods started working closely with Arnold Ross after his program moved back to Ohio State in 1979. In the 1980s she handled most of the Program's administration and dormitory supervision, because Arnold was so deeply involved with his wife Bee's illness. In Brooklyn she attended an arts high school with few math requirements and had planned on a career involving art (mostly water colors). Some years later she attended a "Moore Method" topology course at a graduate program at the University of Miami (Florida), and she loved the mathematical ideas. She moved to the math graduate school at the University of Michigan (around 1956 - 1958), meeting interesting mathematicians there like Raoul Bott and Steve Smale. Later she transferred to math grad school at Tulane University in New Orleans, where she met Alan Woods, who was a postdoc there. They married and Alan was hired in 1963 by Arnold Ross, who had just become chair at Ohio State. When asked how she got started at a math graduate program, and why she moved from one math graduate program to another, she would reply with a smile: "Oh. You know . . . Boyfriends." At the urging of Dr. Ross, Gloria pursued a doctorate in math education and graduated in 1981 with a PhD dissertation that analyzed the teaching methods in the Ross Program. Ross Program Jokes Official Memo: Readjustment to Civilian Life, a document that was probably distributed in 1966. Riddle: What do integral rabbits eat? (Lattice.) Methods of Proof. Numberwacky, by Jordan Pollack (1973): T'was summer, and the problem sets grew harder and harder on the brain. Quadratic reciprocity can make one go insane ... Auto JC, a flow chart that can substitute for an actual Junior Counselor. Wall ball ? Mod ten ? 24 ? No mention was made of Conrad races. © 2020 Ross Mathematics Program. Powered by Jekyll & Minimal Mistakes. Contact us at 1 (844) 262-8409.
References of "van Dam, Tonie 50003245" Recent Advances on GNSS Multipath Reflectometry (GNSS-MR) for Sea and Lake Level Studies van Dam, Tonie ; Tabibi, Sajad ; Geremia-Nievinski, F. et al E-print/Working paper (2018) Global navigation satellite system multipath reflectometry (GNSS-MR) has been used to exploit signals of opportunity at L-band for ground-based sea and lake level studies at several locations in the last ... [more ▼] Global navigation satellite system multipath reflectometry (GNSS-MR) has been used to exploit signals of opportunity at L-band for ground-based sea and lake level studies at several locations in the last few years. Although geodetic-quality antennas are designed to boost the direct transmission from the satellite and to suppress indirect surface reflections, the delay of reflections with respect to the line-of-sight propagation can be used to estimate the water-surface level in a stable terrestrial reference frame. In this contribution, signal-to-noise ratio (SNR) observations from commercial off-the-shelf systems are used to retrieve water level at multiple constellations and modulations. We constrained phase-shifts so as yield more precise reflector heights and further corrected for the tropospheric propagation delays for greater accuracy. We assess GNSS-MR accuracy and precision in two cases. In the first one, using the inversion formal uncertainty and modulation-specific variance factors, reflector heights are combined and converted to water level at hourly epoch spacing and eight-hourly averaging window length. The RMSE between GNSS-MR and tide gauge (TG) records for a single station in the Great Lakes is 1.93 cm for a 12-year period. In the second case, we employ an extended dynamic model, taking tidal velocity and acceleration into account, which is applied for ten stations worldwide. Regression slope between GNSS-MR and TG exhibits a smaller deviation from the ideal 1:1 relationship, compared to the conventional dynamic model (with no acceleration). The RMSE between sub-hourly GNSS-MR and TG is 1.98 cm, with 0.998 correlation coefficient. Tidal constituents agree at the sub-mm level between GNSS-MR and TG. [less ▲] Detailed reference viewed: 129 (21 UL) Statistical Comparison and Combination of GPS, GLONASS, and Multi-GNSS Multipath Reflectometry Applied to Snow Depth Retrieval Tabibi, Sajad ; Geremia-Nievinski, Felipe; van Dam, Tonie in IEEE Transactions on Geoscience and Remote Sensing (2017), (99), Global navigation satellite system (GNSS) multipath reflectometry (MR) has emerged as a new technique that uses signals of opportunity broadcast by GNSS satellites and tracked by ground-based receivers to ... [more ▼] Global navigation satellite system (GNSS) multipath reflectometry (MR) has emerged as a new technique that uses signals of opportunity broadcast by GNSS satellites and tracked by ground-based receivers to retrieve environmental variables such as snow depth. The technique is based on the simultaneous reception of direct or line-of-sight (LOS) transmissions and corresponding coherent surface reflections (non-LOS). Until recently, snow depth retrieval algorithms only used legacy and modernized GPS signals. Using multiple GNSS constellations for reflectometry would improve GNSS-MR applications by providing more observations from more satellites and independent signals (carrier frequencies and code modulations). We assess GPS and GLONASS for combined multi-GNSS-MR using simulations as well as field measurements. 
Synthetic observations for different signals indicated a lack of detectable interfrequency and intercode biases in GNSS-MR snow depth retrievals. Received signals from a GNSS station continuously operating in France for a two-winter period are used for experimental snow depth retrieval. We perform an internal validation of various GNSS signals against the proven GPS-L2-C signal, which was validated externally against in situ snow depth in previous studies. GLONASS observations required a more complex handling to account for topography because of its particular ground track repeatability. Signal intercomparison show an average correlation of 0.922 between different GPS snow depths and GPS-L2-CL, while GLONASS snow depth retrievals have an average correlation that exceeds 0.981. In terms of precision and accuracy, legacy GPS signals are worse, while GLONASS signals and modernized GPS signals are of comparable quality. Finally, we show how an optimal multi-GNSS combined daily snow depth time series can be formed employing variance factors with a ~59%-90% precision improvement compared to individual signal snow depth retrievals, resulting in snow depth retrieval with uncertainty of 1.3 cm. The developed combination strategy can also be applied for the European Galileo and the Chines BeiDou navigation systems. [less ▲] Using GPS and absolute gravity observations to separate the effects of present-day and Pleistocene ice-mass changes in South East Greenland van Dam, Tonie ; Francis, Olivier ; Wahr, J. et al in Earth and Planetary Science Letters (2017), 459 Measurements of vertical crustal uplift from bedrock sites around the edge of the Greenland ice sheet (GrIS) can be used to constrain present day mass loss. Interpreting any observed crustal displacement ... [more ▼] Measurements of vertical crustal uplift from bedrock sites around the edge of the Greenland ice sheet (GrIS) can be used to constrain present day mass loss. Interpreting any observed crustal displacement around the GrIS in terms of present day changes in ice is complicated, however, by the glacial isostatic adjustment (GIA) signal. With GPS observations alone, it is impossible to separate the uplift driven by present day mass changes from that due to ice mass changes in the past. Wahr et al. (1995) demonstrated that viscoelastic surface displacements were related to the viscoelastic gravity changes through a proportionality constant that is nearly independent of the choice of Earth viscosity or ice history model. Thus, by making measurements of both gravity and surface motion at a bedrock site, the viscoelastic effects could be removed from the observations and we would be able to constrain present day ice mass changes. Alternatively, we could use the same observations of surface displacements and gravity to determine the GIA signal. In this paper, we extend the theory of Wahr et al. (1995) by introducing a constant, Z, that represents the ratio between the elastic changes in gravity and elastic uplift at a particular site due to present day mass changes. Further, we combine 20 yrs of GPS observations of uplift with eight absolute gravity observations over the same period to determine the GIA signal near Kulusuk, a site on the southeastern side of the GrIS, to experimentally demonstrate the theory. We estimate that the GIA signal in the region is 4.49 ± 1.44 mm/yr and is inconsistent with most previously reported model predictions that demonstrate that the GIA signal here is negative. 
However, as there is very little in situ data to constrain the GIA rate in this part of Greenland, the Earth model or the ice history reconstructions could be inaccurate (Khan et al., 2016). Improving the estimate of GIA in this region of Greenland will allow us to better determine the present day changes in ice mass in the region, e.g. from GRACE. [less ▲] GRACE era variability in the Earth's oblateness: A comparison of estimates from six different sources Meyrath, Thierry ; Rebischung, Paul; van Dam, Tonie in Geophysical Journal International (2017), 208(2), 1126-1138 We study fluctuations in the degree-2 zonal spherical harmonic coefficient of the Earth's gravity potential, $C_{20}$, over the period 2003-2015. This coefficient is related to the Earth's oblateness and ... [more ▼] We study fluctuations in the degree-2 zonal spherical harmonic coefficient of the Earth's gravity potential, $C_{20}$, over the period 2003-2015. This coefficient is related to the Earth's oblateness and studying its temporal variations, $\Delta C_{20}$, can be used to monitor large-scale mass movements between high and low latitude regions. We examine $\Delta C_{20}$ inferred from six different sources, including satellite laser ranging (SLR), GRACE and global geophysical fluids models. We further include estimates that we derive from measured variations in the length-of-day (LOD), from the inversion of global crustal displacements as measured by GPS, as well as from the combination of GRACE and the output of an ocean model as described by \cite{sunetal2016}. We apply a sequence of trend- and seasonal moving average filters to the different time series in order to decompose them into an interannual, a seasonal and an intraseasonal component. We then perform a comparison analysis for each component, and we further estimate the noise level contained in the different series using an extended version of the three-cornered-hat method. For the seasonal component, we generally obtain a very good agreement between the different sources, and except for the LOD-derived series, we find that over 90\% of the variance in the seasonal components can be explained by the sum of an annual and semiannual oscillation of constant amplitudes and phases, indicating that the seasonal pattern is stable over the considered time period. High consistency between the different estimates is also observed for the intraseasonal component, except for the solution from GRACE, which is known to be affected by a strong tide-like alias with a period of about 161 days. Estimated interannual components from the different sources are generally in agreement with each other, although estimates from GRACE and LOD present some discrepancies. Slight deviations are further observed for the estimate from the geophysical models, likely to be related to the omission of polar ice and groundwater changes in the model combination we use. On the other hand, these processes do not seem to play an important role at seasonal and shorter time scales, as the sum of modelled atmospheric, oceanic and hydrological effects effectively explains the observed $C_{20}$ variations at those scales. We generally obtain very good results for the solution from SLR, and we confirm that this well-established technique accurately tracks changes in $C_{20}$. Good agreement is further observed for the estimate from the GPS inversion, showing that this indirect method is successful in capturing fluctuations in $C_{20}$ on scales ranging from intra- to interannual. 
Obtaining accurate estimates from LOD, however, remains a challenging task and more reliable models of atmospheric wind fields are needed in order to obtain high-quality $\Delta C_{20}$, in particular at the seasonal scale. The combination of GRACE data and the output of an ocean model appears to be a promising approach, particularly since corresponding $\Delta C_{20}$ is not affected by tide-like aliases, and generally gives better results than the solution from GRACE, which still seems to be of rather poor quality. [less ▲] Seasonal low-degree changes in terrestrial water mass load from global GNSS measurements Meyrath, Thierry ; van Dam, Tonie ; Collilieux, Xavier et al in Journal of Geodesy (2017), 91(11), 1329-1350 Large-scale mass redistribution in the terrestrial water storage (TWS) leads to changes in the low-degree spherical harmonic coefficients of the Earth's surface mass density field. Studying these low ... [more ▼] Large-scale mass redistribution in the terrestrial water storage (TWS) leads to changes in the low-degree spherical harmonic coefficients of the Earth's surface mass density field. Studying these low-degree fluctuations is an important task that contributes to our understanding of continental hydrology. In this study, we use global GNSS measurements of vertical and horizontal crustal displacements that we correct for atmospheric and oceanic effects, and use a set of modified basis functions similar to Clarke et al. (2007) to perform an inversion of the corrected measurements in order to recover changes in the coefficients of degree-0 (hydrological mass change), degree-1 (center of mass shift) and degree-2 (flattening of the Earth) caused by variations in the TWS over the period January 2003 - January 2015. We infer from the GNSS-derived degree-0 estimate an annual variation in total continental water mass with an amplitude of $(3.49 \pm 0.19) \times 10^{3}$ Gt and a phase of $70 \pm 3^{\circ}$ (implying a peak in early March), in excellent agreement with corresponding values derived from the Global Land Data Assimilation System (GLDAS) water storage model that amount to $(3.39 \pm 0.10) \times 10^{3}$ Gt and $71 \pm 2^{\circ}$, respectively. The degree-1 coefficients we recover from GNSS predict annual geocentre motion (i.e. the offset change between the center of common mass and the center of figure) caused by changes in TWS with amplitudes of $0.69 \pm 0.07$ mm for GX, $1.31 \pm 0.08$ mm for GY and $2.60 \pm 0.13$ mm for GZ. These values agree with GLDAS and estimates obtained from the combination of GRACE and the output of an ocean model using the approach of Swenson et al. (2008) at the level of about 0.5, 0.3 and 0.9 mm for GX, GY and GZ, respectively. Corresponding degree-1 coefficients from SLR, however, generally show higher variability and predict larger amplitudes for GX and GZ. The results we obtain for the degree-2 coefficients from GNSS are slightly mixed, and the level of agreement with the other sources heavily depends on the individual coefficient being investigated. The best agreement is observed for $T_{20}^C$ and $T_{22}^S$, which contain the most prominent annual signals among the degree-2 coefficients, with amplitudes amounting to $(5.47 \pm 0.44) \times 10^{-3}$ and $(4.52 \pm 0.31) \times 10^{-3}$ m of equivalent water height (EWH), respectively, as inferred from GNSS. 
Corresponding agreement with values from SLR and GRACE is at the level of or better than $0.4 \times 10^{-3}$ and $0.9 \times 10^{-3}$ m of EWH for $T_{20}^C$ and $T_{22}^S$, respectively, while for both coefficients, GLDAS predicts smaller amplitudes. Somewhat lower agreement is obtained for the order-1 coefficients, $T_{21}^C$ and $T_{21}^S$, while our GNSS inversion seems unable to reliably recover $T_{22}^C$. For all the coefficients we consider, the GNSS-derived estimates from the modified inversion approach are more consistent with the solutions from the other sources than corresponding estimates obtained from an unconstrained standard inversion.

Annual variations in GPS-measured vertical displacements near Upernavik Isstrøm (Greenland) and contributions from surface mass loading
Liu, Lin; Khan, Shfaqat Abbas; van Dam, Tonie et al., in Journal of Geophysical Research: Solid Earth (2017)

Geodetic measurements reveal similarities between post–Last Glacial Maximum and present-day mass loss from the Greenland ice sheet
Knudsen, Shfaqat; Bamber, Ingo; Bevis, Michael et al., in Science Advances (2016), 2(9)
Accurate quantification of the millennial-scale mass balance of the Greenland ice sheet (GrIS) and its contribution to global sea-level rise remain challenging because of sparse in situ observations in key regions. Glacial isostatic adjustment (GIA) is the ongoing response of the solid Earth to ice and ocean load changes occurring since the Last Glacial Maximum (LGM; ~21 thousand years ago) and may be used to constrain the GrIS deglaciation history. We use data from the Greenland Global Positioning System network to directly measure GIA and estimate basin-wide mass changes since the LGM. Unpredicted, large GIA uplift rates of +12 mm/year are found in southeast Greenland. These rates are due to low upper mantle viscosity in the region, from when Greenland passed over the Iceland hot spot about 40 million years ago. This region of concentrated soft rheology has a profound influence on reconstructing the deglaciation history of Greenland. We reevaluate the evolution of the GrIS since the LGM and obtain a loss of 1.5-m sea-level equivalent from the northwest and southeast. These same sectors are dominating modern mass loss. We suggest that the present destabilization of these marine-based sectors may increase sea level for centuries to come. Our new deglaciation history and GIA uplift estimates suggest that studies that use the Gravity Recovery and Climate Experiment satellite mission to infer present-day changes in the GrIS may have erroneously corrected for GIA and underestimated the mass loss by about 20 gigatons/year.

Study of space weather impact on Antarctica ionosphere from GNSS data
Bergeot, Nicolas; Chevalier, J.-M.; Bruyninx, Carine et al., Poster (2016, April 29)
The impact of solar activity on the ionosphere at polar latitudes is not well known compared to low and mid-latitudes due to a lack of experimental observations, especially over Antarctica. Consequently, one of the present challenges of the Space Weather community is to better characterize (1) the climatological behavior of the polar ionosphere in response to variations of the solar activity and (2) the different response of the ionosphere at high latitudes during extreme solar events and geomagnetic storms. For that, the combination of GNSS measurements (e.g. GPS, GLONASS and Galileo) on two separate frequencies allows determining the ionospheric delay between a ground receiver and a satellite. This delay is a function of the integrated number of electrons encountered in the ionosphere along the signal ray path, called the Total Electron Content (TEC). It is thus possible to study the behavior of ionospheric TEC at different time and spatial scales from the observations of a network of permanent GNSS stations. In the frame of the GIANT-LISSA and IceCon projects, we have installed five GNSS stations around the Princess Elisabeth station since 2009. We used these stations in addition to other stations from the IGS global network to estimate the ionospheric TEC at different locations over Antarctica. This study presents this regional data set during different solar activity levels and discusses the different climatological behaviors identified in the ionosphere at these high latitudes. Finally, we show a few examples of typical TEC disturbances observed during extreme solar events.

Hybrid mesh/particle meshless method for modeling geological flows with discontinuous transport properties
Bourantas, Georgios; Lavier, Luc; van Dam, Tonie et al.
In the present paper, we introduce the Finite Difference Method-Meshless Method (FDM-MM) in the context of geodynamical simulations. The proposed numerical scheme relies on the well-established FD method along with the newly developed "meshless" method and is considered a hybrid Eulerian/Lagrangian scheme. Mass, momentum, and energy equations are solved using an FDM method, while material properties are distributed over a set of markers (particles), which represent the spatial domain, with the solution interpolated back to the Eulerian grid. The proposed scheme is capable of solving flow equations (Stokes flow) in uniform geometries with particles "sprinkled" in the spatial domain, and is used to solve convection-diffusion problems, avoiding the oscillations produced in the Eulerian approach. The resulting algebraic linear systems were solved using direct solvers. Our hybrid approach can capture sharp variations of stresses and thermal gradients in problems with a strongly variable viscosity and thermal conductivity, as demonstrated through various benchmarking test cases. The present hybrid approach allows for the accurate calculation of fine thermal structures, offering local-type adaptivity through the flexibility of the particle method.

A comparison of interannual hydrological polar motion excitation from GRACE and geodetic observations
Meyrath, Thierry; van Dam, Tonie, in Journal of Geodynamics (2016), 99
Continental hydrology has a large influence on the excitation of polar motion (PM). However, these effects are far from being completely understood. Current global water storage models differ significantly from one another and are unable to completely represent the complex hydrological cycle, particularly at interannual scales. A promising alternative to study hydrological effects on PM is given by the GRACE satellite mission. In this study, we assess the ability of GRACE to investigate interannual hydrological PM excitations. For this purpose, we use the latest GRACE Release-05 data from three different processing centers (CSR, GFZ, JPL) that we convert into estimates of hydrological PM excitation, $\chi_1^H$ and $\chi_2^H$. In addition to these gravimetric excitations, we also consider geodetic hydrological excitations, which we calculate by removing modelled atmospheric and oceanic effects from precise observations of full PM excitations. We remove signals with frequencies $\geq 1$ cpy from the series and compare the resulting estimates of interannual hydrological excitations for the period 2004.5 - 2014.5. The comparison between geodetic and gravimetric excitations reveals some discrepancies for $\chi_1^H$, likely to be related to inadequately modelled atmospheric and oceanic effects. On the other hand, good agreement is observed for $\chi_2^H$. For both components, the best agreement between geodetic and gravimetric excitations is obtained for the estimate from CSR. Very good agreement is obtained between GRACE-derived excitations from different processing centers, in particular for CSR and JPL. Both the comparisons between geodetic and gravimetric excitations and the comparisons between the different gravimetric excitations give substantially better results for $\chi_2^H$ than for $\chi_1^H$, leading to the conclusion that geodetic and gravimetric $\chi_2^H$ can be more reliably determined than $\chi_1^H$. Although there are still some discrepancies between geodetic and gravimetric interannual hydrological excitations, we conclude that GRACE and potential follow-on missions are valuable tools to study the interannual effects of continental hydrology on the excitation of PM.

The Phase 2 North America Land Data Assimilation System (NLDAS-2) Products for Modeling Water Storage Displacements for Plate Boundary Observatory GPS Stations
Li, Zhao; van Dam, Tonie, in International Association of Geodesy Symposia (2015)

Hybrid mesh/particle meshless method for geological flows with discontinuous transport properties
Bourantas, Georgios; Lavier, Luc; Claus, Susanne et al., Scientific Conference (2015, April 12)
Geodynamic modeling is an important branch of Earth Sciences. Direct observation of geodynamic processes is limited in both time and space, while on the other hand numerical methods are capable of simulating millions of years in a matter of days on a desktop computer. The model equations can be reduced to a set of Partial Differential Equations with possibly discontinuous coefficients, governing mass, momentum and heat transfer over the domain. Some of the major challenges associated with such simulations are (1) geological time scales, which require long (in physical time) simulations using small time steps; (2) the presence of localization zones over which large gradients are present and which are much smaller than the overall physical dimensions of the computational domain and require much more refined discretization than for the rest of the domain, much like in fracture or shear band mechanics. An added difficulty is that such structures in the solution may appear after long periods of stagnant behaviour; (3) the definition of boundary conditions, material parameters and that of a suitable computational domain in terms of size; (4) a posteriori error estimation, sensitivity analysis and discretization adaptivity for the resulting coupled problem, including error propagation between different unknown fields. Consequently, it is arguable that any suitable numerical method aimed at the solution of such problems on a large scale must (i) provide ease of discretization refinement, including possible partition of unity enrichment; (ii) offer a large stability domain, so that "large" time steps can be chosen; (iii) offer ease of parallelization and good scalability. Our approach is to rely on "meshless" methods based on a point collocation strategy for the discretization of the set of PDEs. The method is hybrid Eulerian/Lagrangian, which makes it easy to switch between stagnant periods and periods of localization. Mass and momentum equations are solved using a meshless point collocation Eulerian method, while the energy equation is solved using a set of particles, distributed over the spatial domain, with the solution interpolated back to the Eulerian grid at every time step. This hybrid approach allows for the accurate calculation of fine thermal structures, through the ease of adaptivity offered by the flexibility of the particle method. The approximation space is constructed using the Discretization Correction Particle Strength Exchange (DC PSE) method. The proposed scheme gives the capability of solving flow equations (Stokes flow) in fully irregular geometries, while particles "sprinkled" in the spatial domain are used to solve convection-diffusion problems, avoiding the oscillations produced in the Eulerian approach. The resulting algebraic linear systems were solved using direct solvers. Our hybrid approach can capture sharp variations of stresses and thermal gradients in problems with a strongly variable viscosity and thermal conductivity, as demonstrated through various benchmarking test cases such as the development of Rayleigh-Taylor instabilities, viscous heating and flows with non-Newtonian rheology.

A warmer world
van Dam, Tonie; Weigelt, Matthias; Jäggi, Adrian, in Pan European Networks: Science & Technology (2015), (14), 58-59

Quality Evaluation of the Weekly Vertical Loading Effects Induced from Continental Water Storage Models
Li, Zhao; van Dam, Tonie; Collilieux, Xavier et al., in Willis, Pascal (Ed.) Proceedings of the 2013 IAG Scientific Assembly, Potsdam, Germany, 1-6 September, 2013 (2015)
To remove continental water storage (CWS) signals from the GPS data, CWS mass models are needed to obtain predicted surface displacements. We compared weekly GPS height time series with five CWS models: (1) the monthly and (2) three-hourly Global Land Data Assimilation System (GLDAS); (3) the monthly and (4) one-hourly Modern-Era Retrospective Analysis for Research and Applications (MERRA); (5) the six-hourly National Centers for Environmental Prediction-Department of Energy (NCEP-DOE) global reanalysis products (NCEP-R-2). We find that of the 344 selected global IGS stations, more than 77% of stations have their weighted root mean square (WRMS) reduced in the weekly GPS height by using both the GLDAS and MERRA CWS products to model the surface displacement, and the best improvements are concentrated mainly in North America and Eurasia. We find that the one-hourly MERRA-Land dataset is the most appropriate product for modeling weekly vertical surface displacement caused by CWS variations. The three-hourly GLDAS data ranks second, while the GLDAS and MERRA monthly products rank third. The higher spatial resolution MERRA product improves the performance of the CWS model in reducing the scatter of the GPS height by about 2–6% compared with the GLDAS. Under the same spatial resolution, the higher temporal resolution could also improve the performance by almost the same magnitude. We also confirm that removing the ATML and NTOL effects from the weekly GPS height would remarkably improve the performance of the CWS model in correcting the GPS height by at least 10%, especially for coastal and island stations. Since the GLDAS product has a much greater latency than the MERRA product, MERRA would be a better choice to model surface displacements from CWS. Finally, we find that the NCEP-R-2 data is not sufficiently precise to be used for this application. Further work is still required to determine the reason.

Assessment of modernized GPS L5 SNR for ground-based multipath reflectometry applications
Tabibi, Sajad; Nievinski, Felipe G.; van Dam, Tonie et al., in Advances in Space Research (2015), 55(4), 1104-1116

The new Horizon2020 "European Gravity Service for Improved Emergency Management" project: A new service for gravity field products and to support emergency response to hydrological extreme events
Weigelt, Matthias; Jäggi, Adrian; Flechtner, Frank et al., Poster (2014, October 14)

A methodology to choose the orbit for a double-pair-scenario future gravity satellite mission: Experiences from the SC4MGV project
Weigelt, Matthias; Iran Pour, Siavash; Murböck, Michael et al., Scientific Conference (2014, September 30)

Seasonal Variations of Low-degree Spherical Harmonics Derived from GPS Data and Loading Models
Wei, Na; van Dam, Tonie; Weigelt, Matthias et al.

How well can the combination of hlSST and SLR replace GRACE? A discussion from the point of view of applications
Weigelt, Matthias; van Dam, Tonie; Baur, Oliver et al.

Genetic-algorithm based search strategy for optimal scenarios of future dual-pair gravity satellite missions
Iran Pour, Siavash; Reubelt, Tilo; Weigelt, Matthias et al., Poster (2014, June)
1998 - 1999 CS Annual Report Publications Listed by Author Aguilera, M. Using the Heartbeat Failure Detector for Quiescent Reliable Communication and Consensus in Partitionable Networks. Theoretical Computer Science 220, 1 (June 1999), 3-30 (with W. Chen and S. Toueg). —. Randomization and Failure Detection: A Hybrid Approach to Solve Consensus. SIAM Journal of Computing 28, 3 (June 1999), 890-903 (with S. Toueg). —. Matching Events in a Content-based Subscription System. In Proceedings of the 18th ACM Symposium on Principles of Distributed Computing (May 1999), 53-61 (with R. Strom, D. Sturman, M. Astley and T. Chandra). —. Consensus in the Crash-Recovery Model. In Proceedings of the 12th International Symposium on Distributed Computing (Sept 1998), 231-245 (with W. Chen and S. Toueg). Chang, C. Interfacing Java with the Virtual Interface Architecture, Proceedings of ACM SIGPLAN Java Grande Conference, pp.51-57, June 1999. (with T. von Eicken). —. J-Kernel: A capability-based operating system for Java, in secure internet programming: Security issues for distributed and mobile objects. LNCS, Springer Verlag, (1999), 369-396 (with T. von Eicken, G. Czajkowski, C. Hawblitzel, D. Hu, and D. Spoonhower). —. MRPC: A high performance RPC system for MPMD parallel computing. Software—Practice and Experience 29, 1, John Wiley & Sons (Jan 1999), 44-66 (with G. Czajkowski and T. von Eicken). —. Security versus performance tradeoffs in RPC implementations for safe language systems. Proceedings of ACM SIGOOPS European Workshop (Sept 1998) (with G. Czajkowski, C. Hawblitzel, D. Hu and T. von Eicken). —. Resource management in extensible internet servers. Proceedings of ACM SIGOPS European Workshop (Sept 1998) (with G. Czajkowski, C. Hawblitzel, D. Hu and T. von Eicken). Chen, W. Failure detection and consensus in the crash-recovery model. Proceedings of the 12th International Symposium on Distributed Computing, Lecture Notes on Computer Science, Springer-Verlag, (Sept 1998), 231-245 (with M. Aguilera and S. Toueg) —. Using the heartbeat failure detector for quiescent reliable communication and consensus in partitionable networks. Theoretical Computer Science, Elsevier Science 220, 1 (June 1999), 3-30 (with M. Aguilera and S. Toueg). Chu, F. A decision-theoretic approach to reliable message delivery. Proceedings of the 12th International Symposium on Distributed Computing (Sept 1998), 89-103 (with J. Halpern). —. Reducing $\Omega$ to $\Diamond \mathcal{W}$. Information Processing Letters 67 (Sept 1998), 289-293 —. Least expected cost query optimization: An exercise in utility. Proceedings of the 18th ACM Symposium on Principles of Database Systems (May 1999), 138-147 (with J. Halpern and P. Seshadri) Czajkowski, G. JRes: A resource accounting interface for Java. ACM Conference on Object Oriented Languages and Systems (OOPSLA'98) (Oct 1998) (with T. von Eicken). —. Resource Management for Extensible Internet Servers. Eight ACM SIGOPS European Workshop (Sept 1998) (with C-C. Chang, C. Hawblitzel, D. Hu, and T. von Eicken). —. Resource Control for Database Extensions. The Fifth USENIX Conference on Object Oriented Technologies and Systems. (May 1999) (with T. Mayr, P. Seshadri, and T. von Eicken). Glew, N. Type-safe linking and modular assembly language. In the Twenty-sixth ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (Jan. 1999), 250-261 (with G. Morrisett) —. TALx86: A realistic typed assembly language. 
ACM SIGPLAN Workshop on Compiler Support for System Software (May 1999), 25-35 (with G. Morrisett, K. Crary, D. Grossman, R. Samuels, F. Smith, D. Walker, S. Weirich and S. Zdancewic) Grossman, D. TALx86: A realistic typed assembly language. In 1999 ACM SIGPLAN Workshop on Compiler Support for System Software, Atlanta, GA, USA, May 1999. pages 25-35 (with G. Morrisett, K. Crary, N. Glew, D. Grossman, R. Samuels, F. Smith, D. Walker, S. Weirich and S. Zdancewic) —. JDuck: Building a software engineering tool in Java as a CS2 project. Proceedings of SIGCSE '99 (Mar 1999) (with M. Godfrey) Hawblitzel, C. Type System Support for Dynamic Revocation. ACM SIGPLAN Workshop on Compiler Support for System Software (May 1999) (with T. von Eicken). Holland-Minkley, A. Verbalization of high-level formal proofs. Sixteenth National Conference on Artificial Intelligence (July 1999), 277 (R. Barzilay and R. Constable). Kempe, D. On the power of quantifiers in first-order algebraic specification. in G: Proceedings of the 12th International Workshop on Computer Science Logic, CSL'98 Springer LNCS 1584 (Gottlob, E.Grandjean, K.Seyr, eds.) (with A. Schoenegge). —. On the weakness of conditional equations in algebraic specification. Combinatorics, Computation & Logic. Proceedings of DMTCS'99 and CATS'99. Australian Computer Science Communications 21, 3 (C.S. Calude, M.J. Dinneen, eds.) (with A. Schoenegge). Kettnaker, V. Bayesian multi-camera surveillance. Proc. of IEEE CVPR '99 2 (1999), 253-259 (with R. Zabih). —. Minimum-entropy models of scene activity. Proc. of IEEE CVPR '99 1 (1999), 281-286 (with M. Brand) —. Counting people from multiple cameras. Proc. of IEEE ICMCS '99 (1999) (with R. Zabih). Kumar, A. Wavelength conversion in optical networks. ACM SODA (Symposium on Discrete Algorithms) (Jan 1999), 566-575 (with J. Kleinberg). Li, L. Contour extraction of moving objects. 14th International Conference on Pattern Recognition, Vol. II (August 1998), 1427-1432. (with L. Qiu and L. Li). Mardis, S. Combining error-driven pruning and classification for partial parsing. Proceedings of the Sixteenth International Conference on Machine Learning (1999) (with C. Cardie and D. Pierce). Mayr, T. Client-site query extensions. SIGMOD Conference 1999 (1999), 347-358 (Tobias Mayr and Praveen Seshadri) —. Resource control for Java database extensions. COOTS '99 (1999), 85-98 (G.. Czajkowski, P. Seshadri and T. von Eicken). Millett, L. Slicing Promela and its applications to model checking, simulation, and protocol understanding. Proceedings of the 4th Workshop on Automata Theoretic Verification with the SPIN Model Checker (Nov 1998), 75-83. (with T. Teitelbaum) —. Channel dependence analysis for slicing Promela. Proceedings of the International Symposium on Software Engineering for Parallel and Distributed Systems (May 1999), 52-61 (with T. Teitelbaum). Wagstaff, K. Noun phrase coreference as clustering. Proceedings of the Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora (EMNLP/VLC) (June 1999), 82-89 (with C. Cardie). Walker, D. Typed memory management in a calculus of capabilities. Proceedings of the Twenty-sixth Symposium on Principles of Programming Languages (Jan. 1999), 262-275 (K. Crary, G. Morrisett). —. TALx86: A realistic typed assembly language. ACM SIGPLAN Workshop on Compiler Support for System Software (May 1999), 25-35 (with G. Morrisett, K. Crary, N. Glew, D. Grossman, R. Samuels, F. Smith, S. Weirich, S. Zdancewic). Wang, J. 
A survey of Web caching schemes for the Internet, to appear in ACM Computer Communication Review (CCR). —. Efficient and accurate Ethernet simulation. To appear in Proc. of the 24th IEEE Annual Conference on Local Computer Networks (LCN'99) (1999) (with S. Keshav). Weirich, S. Intensional polymorphism in type erasure semantics. Proceedings of the Third ACM Sigplan International Conference on Functional Programming (ICFP '98) (Sept 1998), 301-312 (with K. Crary and G. Morrisett) —. TALx86: A realistic typed assembly language. ACM SIGPLAN 1999 Workshop on Compiler Support for System Software (WCSSS '99) (May 1999), 25-35 (with G. Morrisett, K. Crary, N. Glew, D. Grossman, R. Samuels, F. Smith, D. Walker and S. Zdancewic). Zdancewic, S. TALx86: A Realistic Typed Assembly Language. ACM SIGPLAN Workshop on Compiler Support for System Software (May 1999), 25-35 (with G. Morrisett, K. Crary, N. Glew, D. Grossman, R. Samuels, F. Smith, D. Walker and S. Weirich).
From individual to collective behaviours: exploring population heterogeneity of human mobility based on social media data Yuan Liao ORCID: orcid.org/0000-0002-6982-16541, Sonia Yeh1 & Gustavo S. Jeuken2 This paper examines the population heterogeneity of travel behaviours from a combined perspective of individual actors and collective behaviours. We use a social media dataset of 652,945 geotagged tweets generated by 2,933 Swedish Twitter users covering an average time span of 3.6 years. No explicit geographical boundaries, such as national borders or administrative boundaries, are applied to the data. We use spatial features, such as geographical characteristics and network properties, and apply a clustering technique to reveal the heterogeneity of geotagged activity patterns. We find four distinct groups of travellers: local explorers (78.0%), local returners (14.4%), global explorers (7.3%), and global returners (0.3%). These groups exhibit distinct mobility characteristics, such as trip distance, diffusion process, percentage of domestic trips, visiting frequency of the most-visited locations, and total number of geotagged locations. Geotagged social media data are gradually being incorporated into travel behaviour studies as user-contributed data sources. While such data have many advantages, including easy access and the flexibility to capture movements across multiple scales (individual, city, country, and globe), more attention is still needed on data validation and identifying potential biases associated with these data. We validate against the data from a household travel survey and find that despite good agreement of trip distances (one-day and long-distance trips), we also find some differences in home location and the frequency of international trips, possibly due to population bias and behaviour distortion in Twitter data. Future work includes identifying and removing additional biases so that results from geotagged activity patterns may be generalised to human mobility patterns. This study explores the heterogeneity of behavioural groups and their spatial mobility including travel and day-to-day displacement. The findings of this paper could be relevant for disease prediction, transport modelling, and the broader social sciences. Understanding travel behaviour can provide insights for a wide range of disciplines, including urban planning [1], transport management [2], epidemiology [3], ecology, and social science [4]. Previous travel behaviour research has used cross-sectional data [5], such as from household travel surveys. Although it is one of the most prevalent data sources, surveys are costly to collect and therefore typically suffer from small sampling rates, short survey duration, under-reporting of long-distance trips [6], and long lag times between data collection and data availability [7]. Despite some drawbacks, travel surveys contain socio-demographic information and detailed activity records that make them difficult to replace by emerging data sources [6]. Those characteristics enable researchers to examine population-level mobility determinants and large-scale changes in daily mobility. Traditional travel surveys also contain rich explanatory variables that enable the validation/calibration that is essential for utilising emerging data sources. 
The rapid development of information and communication technology (ICT) has the potential to address some of the shortcomings mentioned above and broaden the types of questions that can be explored in travel behaviour studies [8]. Emerging data sources, such as records from Global Positioning System (GPS) devices, smart cards, mobile phones, and other online systems, have deepened the understanding of human mobility [9, 10]. Among the emerging data sources, social media data are being gradually accepted as user-contributed data sources in travel behaviour studies, such as activity pattern classification [11], large-scale urban activity [12], and mobility patterns [13]. Geotagged tweets from the Twitter platform represent one type of social media data. A tweet is a short social media text message associated with a unique user on the Twitter platform, and a geotagged tweet also contains the GPS coordinates if the user allows this information to be attached to the tweet. The number of geotagged tweets is low compared to the total number of tweets, with one study finding around 1-3% in Syria [14]. Similarly, in our previous study, we found that users posting geotagged tweets accounted for a limited proportion of overall Twitter users, e.g., 7.4% (George, South Africa), 1.9% (Barcelona, Spain), 1.1% (Kuwait), and 0.3% (Sweden) [15]. The number of geotagged tweets per user also varies among countries. Median values (with the 5th-95th percentile range in parentheses) over a six-month sampling period are 9 (1-190) (Kuwait), 2 (1-50) (Australia), 2 (1-41) (Sweden), and 2 (1-20) (Barcelona, Spain) [15]. Despite that, geotagged tweets have proved a useful proxy for tracking and predicting human movement [10]. Such a data source provides precise location information [10], easy and free access [16], and opportunities for continuously tracking activities without predefined geographical boundaries such as national borders or administrative boundaries [17]. The main criticisms are biased population representation [18] and behaviour distortion [19, 20] regarding when and where locations are reported via geotagged tweets. Some studies have compared multiple data sources to identify or adjust the biases [20, 21] and to validate against "ground truth" [22]. Despite some disadvantages of geotagged tweets, one recent review highlights the usefulness of such data sources for modelling travel behaviour [16] and understanding social behaviours such as urban neighbourhood isolation [23]. Geotagged tweets can be obtained by purchasing the complete set of public tweets from Twitter Firehose, using the Streaming API for up to a maximum of 1% of public tweets, or retrieving user timelines by user name/ID for up to 3200 historical tweets that are set publicly accessible by the user [24]. Geotagged tweets are often limited to a geographical bounding box such as national borders or administrative boundaries when collected from the Streaming API, yielding a lateral dataset that covers a large number of Twitter users for a snapshot of time. If the movement of a user occurs across or outside the bounding box, it is not captured with this method. Geotagged tweets collected from user timelines do not have this geographical boundary limitation, and the historical tweets of a specified user can be collected in a few seconds. These tweets can cover multiple years, creating a longitudinal record of an individual's locations without any geographical boundaries [24].
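As a concrete illustration of the timeline-based collection route just described, the sketch below pages through a single user's public timeline (capped at roughly 3200 tweets by the v1.1 API) and keeps only the geotagged tweets. This is a hypothetical sketch, not the data pipeline used in this study (which relied on Gnip and user timelines, as described in the Methods): the Tweepy library, the placeholder credentials, and the screen name are all assumptions, and access rules for the Twitter/X API have changed over time.

```python
# Schematic only: page through one user's public timeline (the v1.1 API returns
# at most ~3200 historical tweets) and keep the geotagged ones.
# Credentials and the screen name below are placeholders.
import tweepy

auth = tweepy.OAuth1UserHandler("CONSUMER_KEY", "CONSUMER_SECRET",
                                "ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")
api = tweepy.API(auth, wait_on_rate_limit=True)

def geotagged_timeline(screen_name, limit=3200):
    """Return (timestamp, lat, lon) for each geotagged tweet on the user's timeline."""
    records = []
    cursor = tweepy.Cursor(api.user_timeline, screen_name=screen_name,
                           count=200, include_rts=False)
    for status in cursor.items(limit):
        if status.coordinates:                      # GeoJSON Point: [longitude, latitude]
            lon, lat = status.coordinates["coordinates"]
            records.append((status.created_at, lat, lon))
    return records
```

Because only the `coordinates` field of each tweet is needed, the same filtering step applies regardless of whether the tweets come from a purchased Firehose dump, the Streaming API, or the timeline endpoint.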
Non-recurrent mobility that is often under-reported in a one-day travel diary (e.g., tourists' mobility [25]) can be studied using this type of data. It is also feasible to scale up the number of Twitter users to study the influence of global cities [26]. Geotagged social media data have been criticised for non-representativeness due to population bias and behaviour distortion. It is found that Twitter users in the U.S. over-represent dense population regions and are predominantly male [27]. Behaviour distortion involves both tweeting behaviour and reasons for geotagging, both of which can lead to non-representativeness. The time sparsity of tweets causes the trajectory of geotagged tweets collected from users to be incomplete compared with the actual mobility trajectory of those users. One recent study shows that people geotag consciously and intentionally in uncommon places, and they often geotag soon after arriving at the place [20]. These biases need to be considered before drawing any conclusions from these data sources. Social media data have been used to study both aggregate mobility behaviour and individual-based activity behaviour [16]. Representative studies are summarised in Table 1. At the aggregate level, studies have shown that social media data can be a reasonable proxy for population mobility. Studies have used social media data to demonstrate the truncated power law of trip distance distribution [28] and Zipf's law of the visitation frequency, which describes people's tendency to return to a couple of locations they frequently visit [29]. Studies generally found good agreements cross validating social media data against other data of higher time resolution, such as mobile phone call detail record (CDR) [22]. At the individual level, studies have used geotagged social media data to infer activity purpose. Such studies usually have specified application, e.g., bike sharing behaviour [30], prediction of next location [31], and lifestyle behaviour [32, 33]. Social media data have been used to identify activity choice patterns [11, 32] and to recognize specific activities [34], combining spatial and temporal information with semantic information in the data. Table 1 Representative studies of travel behaviour using social media data. T is the time span covered by the dataset. S represents data source. On the Angle column, A indicates aggregate/lateral level and I indicates individual/longitudinal level. On the S column, F indicates Foursquare and T indicates Twitter Previous travel behaviour studies using social media data have been limited by data collection and a lack of understanding of population heterogeneity. Most studies are based on data collected within a specified geographical bounding box over a short time range (Table 1). Use of a geographical bounding box, such as national borders or administrative boundaries, precludes capture of trips outside or across the boundary of the box, biasing the resulting data toward short-distance travel. On population heterogeneity, most studies of aggregate population behaviours neglect individual differences, while studies of individual mobility usually neglect common features that drive similar behaviours across groups of individuals. Although the travel behaviours of individuals in any population are neither identical to nor independent of each other's, there has been little work on combining aggregate and individual perspectives to gain new insights about travel behaviours of a heterogeneous population. 
Understanding travel patterns across scales from individuals to a population is the next step in understanding urban mobility and social behaviour [4], especially when comparing mobility in different cities [38]. This paper reveals the population heterogeneity of geotagged activity patterns using a long-term dataset without any geographical boundaries, such as national borders or administrative boundaries. Specifically, this study attempts to answer the following three questions: Are there any distinct patterns that characterise the observed individual geotagged activities? What are the spatial and temporal characteristics derived from different geotagged activity patterns? Can geotagged tweets be used as a proxy to approximate the mobility patterns of different behavioural groups? The dataset includes 652,945 geotagged tweets generated by 2,933 Swedish Twitter users covering time spans of more than one year (3.6 years on average). We first describe the geotagged tweets dataset and validate it against a household travel survey. To identify the population heterogeneity of geotagged activity patterns, we combine aggregate and individual analysis techniques: we first analyse the geotagged trajectories of each user to classify them regarding their activity patterns, and then we conduct an aggregate analysis for each group. We characterise the features of individual trajectories of geotagged tweets using both geographical and network properties. The features describing users' activity patterns are based on those found in the literature. Hierarchical clustering is a descriptive data mining method that can produce new, non-trivial classifications of users based on the available dataset [39]. Patterns of individual trajectories are thus grouped into four categories: local returners, global returners, local explorers, and global explorers. The spatial and temporal characteristics of these four groups of individuals are explored. We use two datasets in this study: geotagged tweets from Twitter and individual trip information from the Swedish National Travel Survey. We use the household travel survey data to investigate the representativeness of geotagged tweets via a descriptive analysis, comparing spatio-temporal characteristics (behaviour distortion) and the population distribution (population biases). The rest of this section introduces the methodology that identifies the population heterogeneity of human mobility, as shown in Fig. 1. Six spatial features are proposed to describe the individual geotagged activity patterns in the feature construction. Based on the geotagged activity trajectories, the features are calculated per user, and hierarchical clustering is applied. We identify four groups of users with distinct geotagged activity patterns. We further apply down-sampling to test the impact of geotweeting frequency on the group identification. Methods structure. RQ1—Are there any distinct patterns that characterise the observed individual geotagged activities? RQ2—What are the spatial and temporal characteristics derived from different geotagged activity patterns? RQ3—Can geotagged tweets be used as a proxy to approximate the mobility patterns of different behavioural groups? Data collection and pre-processing In a previous study, we used the Gnip database to identify 5000 non-commercial Twitter users who geotagged their tweets most frequently during a six-month period (20 December 2015–20 June 2016) within the geographical bounding box of Sweden [15]. 
Gnip is a Twitter subsidiary which sells historical tweets in bulk and provides access to the Firehose API. We extract these top users' historical tweets (without applying a spatial boundary) from their user timelines [40]. The data are limited to 3200 tweets per user. This method produces a varied time span and varied tweet number, since not all users reached the 3200-tweet maximum. Because the tweeting frequency varies among users, the time span collected per user also varies: the higher the tweeting frequency, the shorter the time span collected from a user. We further apply the following rules to pre-process the data to ensure that the individuals included in the study live in Sweden and have a substantial number of geotagged tweets so we can reasonably capture their activity trajectories: (1) the covered time span is above 1 year, (2) the geotweeting frequency (geotagged tweets/day) is above 0.1, or the total amount of geotagged tweets is above 50, and (3) the most frequently visited locations is in Sweden. After screening, we identify 2926 users and 652,945 geotagged tweets. Using Twitter data, a "trip" is defined as the trajectory between two consecutive geotagged tweets generated by the same user. A trip in this study is equivalent to displacement in some previous studies. Waiting time is defined here as the time interval between two consecutive actions (geotagged tweets in this context) by the same individual [41]. A trip should also have a distance larger than 10 m given the precision of GPS coordinates generated by Twitter. Swedish travel survey The survey data come from the Swedish National Travel Survey for the years of 2011–2014 [42]. The survey data are used to compare the trip length distribution with those derived from geotagged tweets. It consists of a total of 31,457 travel diaries spanning the period of a day, with detailed information on individual trip distance, travel time, mode of transportation, and trip purpose [15]. The travel survey also includes a separate dataset containing a total of 9024 trips during 60 days from the same group of participants as in the travel diary data. These include trips that are either longer than 100 km or to neighbouring countries with a distance shorter than 100 km. To be consistent when comparing Twitter data with the survey data, Twitter data are filtered to either only include domestic trips beginning and ending on the same day, with distances longer than one kilometre (minimum distance in the survey), or international trips. Geotagged activity pattern: feature construction The locations that user i visited are first captured using all geotagged tweets by user i with the time stamps: \((X,Y,t)_{i,k}\), \(k = 1,2,\ldots,N_{i}\) where X is the decimal degree of Latitude, Y is the decimal degree of Longitude, t the time stamp (UTC) of the kth location. We define dom as the indicator to show whether the location is within Sweden: \(\operatorname{dom} = 0\) is outside Sweden and \(\operatorname{dom} = 1\) is in Sweden. \(N_{i}\) is the total number of locations visited by the user i through his/her geotagged tweets, and \(T_{i}\) is the total captured time span of user i. With \(t_{l}\) as the local time of the tweets, we further calculate the month variable \(m\in [1,12]\), the weekday variable w (weekday = 1 and weekend = 0), and the hour of the day, \(h\in [1,24]\). The time sequence of user's locations (user trajectory) is therefore: $$\begin{aligned} S_{i}=(X,Y,t,t_{l},m,w,h,\operatorname{dom})_{i,k},\quad k=1,2, \ldots ,N _{i}. 
\end{aligned}$$ For user i, the number of distinct locations is smaller than or equal to the total number of locations user i visited. Let \(n_{i}\) be the number of distinct locations, \(f_{i,j}\) be the visiting frequency of location j, and \(T_{i,j}\) be the time interval of two visits of the location j. The vector of visited distinct locations is therefore: $$\begin{aligned} \mathbf{L}_{i} = (X,Y,f,\mathbf{T},\mathbf{m},\mathbf{w}, \mathbf{h})_{i,j},\quad j=1,2, \ldots ,n_{i}. \end{aligned}$$ A trip, the connection between two consecutive geotagged tweets generated by the same user, is represented by the arc connecting two consecutive geotagged tweets with locations \(j-1\) and j. (If \(j-1\) and j are within 10 metres of each other or the tweets are within 10 minutes, these are considered to be the same location and not a distinct trip.) The arc connecting these locations has a Haversine distance (distance along the curved surface of the earth), \(d > 10\) m, and time interval \(\triangle T > 10 \min \) between the tweets. For each trip, if location \(j-1\) and j are located within Sweden, that trip is defined as a domestic trip, \(\operatorname{dom} = 1\), and if location \(j-1\) and j are located outside Sweden, \(\operatorname{dom} = 2\), otherwise \(\operatorname{dom} = 0\). The origin-destination matrix that is based on the trajectory of geotagged tweets of user i (\(\mathbf{ODM}_{i}\)) is a directed graph with the trip attributes shown below. $$\begin{aligned} \mathbf{ODM}_{i} = (f, d, \operatorname{dom})_{p,q},\quad p,q=1,2, \ldots ,n _{i}. \end{aligned}$$ Based on the literature review, we propose two essential aspects: how far one travels and how actively one explores new locations. To do pattern mining, we need to find proper summary statistics as the features to characterise the geotagged activity patterns. Therefore, we first examine the underlying distribution of the trip distance (\(d _{i,j}, j = 1,2,\ldots,n_{i}\)) and the network node degree (\(f_{i,j}, j = 1,2,\ldots,n_{i}\)) for all the individuals' geotagged activity trajectories. Specifically, we compare the theoretical distribution that best fits the empirical distribution to see whether the empirical distribution is heavy-tailed [43]. It turns out most users' trip distance and network node degree follow a heavy-tailed distribution, such as the distribution of Cauchy, Lévy, Burr, and Pareto. To deal with the highly skewed data, log transformation is applied to variables \(f_{i,j},d_{i,j}, j = 1,2,\ldots,n _{i}\) to calculate the log-mean and the log-variance. The summary statistics below are proposed to quantify the key characteristics of Twitter users' geotagged activity patterns. Six features of geographical characteristics and network properties are proposed to represent an individual geotagged trajectory. Geographical characteristics are described by features \(r_{g}\), \(D_{o}\), and d. Radius of gyration, \(r_{g}\) (km), refers to the travel distance range weighted by the visiting frequency. The total radius of gyration \(r_{g}\) is defined as: $$\begin{aligned} r_{g} = \sqrt{\frac{1}{n_{i}}\sum _{q=1}^{n_{i}}f_{q}\cdot ( \mathbf{r}_{q}-\mathbf{r}_{\mathrm{cm}})^{2}}, \end{aligned}$$ where \(\mathbf{r}_{q} = [X1,X2]_{q}\) and the mass center of the visited locations: $$\begin{aligned} \mathbf{r}_{\mathrm{cm}} = \biggl[\frac{\sum_{q=1}^{n_{i}}(X_{q}\cdot f_{q})}{ \sum_{q=1}^{n_{i}}X_{q}},\frac{\sum_{q=1}^{n_{i}}(Y_{q}\cdot f_{q})}{ \sum_{q=1}^{n_{i}}Y_{q}} \biggr]. 
\end{aligned}$$ Location distance variance, \(D_{o}\) (km), refers to the geographical dispersion degree of visited locations. \(\mathbf{ODM}_{d} = (d_{p,q})\) represents the linear-scale trip distance matrix where \(d_{p,q} = d_{q,p}\), \(d_{p,q} = 0, p=q\). The log-transformed trip distance matrix is indicated by \(\mathbf{ODM} _{d}^{\prime }= (\log (d_{p,q}))\). The only zero elements of the linear-scale \(\mathbf{ODM}_{d}\) entries are on the diagonal for which, instead of using additive smoothing, we retain them on the diagonal of the log-transformed \(\mathbf{ODM}_{d}^{\prime }\). \(\mathbf{ODM}_{d} ^{\mathrm{norm}} = d_{p,q}^{\mathrm{norm}}\) is defined as: $$\begin{aligned} \mathbf{ODM}_{d}^{\mathrm{norm}} = \mathbf{ODM}_{d}^{\prime }- \Biggl( \sum_{p=1}^{n_{i}}\sum _{q=1}^{n_{i}}\log (d_{p,q})\cdot f_{p,q} \Biggr)*\frac{ \mathbf{J}_{n_{i}}}{n_{i}^{2}}, \end{aligned}$$ where \(J_{n_{i}}\) the unit matrix. So the location distance variance \(D_{o}\) is defined by: $$\begin{aligned} D_{o}=\sqrt{\frac{\sum_{p=1}^{n_{i}}\sum_{q=1}^{n_{i}}d_{p,q}^{ \mathrm{norm}}}{n_{i}^{2}}}. \end{aligned}$$ Mean value of log-transformed trip distance, d (km), refers to the average log-transformed distance between two consecutive geotagged tweets, defined as: $$\begin{aligned} d=\frac{\sum_{k=2}^{N_{i}}\log (d_{k-1,k})}{N_{i}-1}. \end{aligned}$$ Network properties are described by feature clustering coefficient (C̅), average node degree (z), and normalised node degree (\(z_{m}\)). Clustering coefficient (average), C̅ (−), refers to the degree to which the neighbours of a given node link to each other [44, p. 63]. For a node (location) j with degree (visiting frequency) \(f_{i,j}\), its local clustering coefficient is defined as: $$\begin{aligned} C_{j}=\frac{2L_{j}}{f_{j}(f_{j}-1)}, \end{aligned}$$ where \(L_{j}\) indicates the number of links between the \(k_{j}\) neighbours of node j. The average clustering coefficient of the whole network is calculated by: $$\begin{aligned} \overline{C}=\frac{1}{n_{i}}\sum_{j=1}^{n_{i}}C_{j}. \end{aligned}$$ The mean value of the log-transformed node degree, z (−), represents the overall visiting frequency. Each visited location is seen as one node in the network, and the visiting frequency is equivalent to the node degree; therefore, the average value of the node degree z is one important indicator of the network properties. It is defined as: $$\begin{aligned} z=\frac{\sum_{j=1}^{n_{i}}\log (f_{j})}{n_{i}}. \end{aligned}$$ \(z_{m}\) (−) is the max node degree divided by the sum of total degrees, which indicates the how centralised the overall visited locations are. The normalised max node degree \(z_{m}\) is defined as: $$\begin{aligned} z_{m}=\frac{\max [f_{j}]}{\sum_{j=1}^{n_{i}}f_{j}}. \end{aligned}$$ The proposed features are summarised in Table 2. The geographical features reflect how far people travel and all have the same units, km. The radius of gyration, which combines the locations' geographical distribution and their visiting frequency, has been widely applied to characterise human mobility patterns [28, 45]. \(D_{o}\) and d describe how the visited locations are distributed geographically. d indicates the average distance between trips and \(D_{o}\) quantifies the variation of trip distance. Not all the visited locations are fully connected with each other. Hence in the calculation of d and \(D_{o}\), distance is only counted when there exists a connection between two consecutive geotagged tweets. 
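To make the feature definitions above concrete, the following is a minimal sketch (not the authors' code) of how a few of them could be computed for one user from a chronological list of geotagged points. It assumes nearby tweets have already been snapped to distinct locations, computes the centre of mass as a standard frequency-weighted mean, and omits \(D_{o}\), the clustering coefficient, and the 10-minute condition on trips for brevity.

```python
# Illustrative feature computation for a single user's geotagged trajectory.
import math
from collections import Counter

import numpy as np

EARTH_RADIUS_KM = 6371.0

def haversine_km(p, q):
    """Great-circle distance in km between two (lat, lon) points in decimal degrees."""
    lat1, lon1, lat2, lon2 = map(math.radians, (p[0], p[1], q[0], q[1]))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

def user_features(points):
    """points: chronological list of (lat, lon) tuples, snapped to distinct locations."""
    visits = Counter(points)                              # f_j: visits per distinct location
    locs = np.array(list(visits.keys()), dtype=float)     # shape (n_i, 2)
    f = np.array(list(visits.values()), dtype=float)
    n_i = len(f)

    # Radius of gyration r_g: spread of locations around the frequency-weighted centre.
    centre = (f[:, None] * locs).sum(axis=0) / f.sum()
    sq_dist = np.array([haversine_km(loc, centre) ** 2 for loc in locs])
    r_g = math.sqrt((f * sq_dist).sum() / n_i)

    # Mean log trip distance d between consecutive tweets (trips shorter than 10 m dropped).
    dists = [haversine_km(points[k - 1], points[k]) for k in range(1, len(points))]
    dists = [x for x in dists if x > 0.01]
    d = float(np.mean(np.log(dists)))

    # Network summaries: mean log node degree z and normalised max degree z_m.
    z = float(np.mean(np.log(f)))
    z_m = float(f.max() / f.sum())
    return {"r_g": r_g, "d": d, "z": z, "z_m": z_m}
```

In a full pipeline, the resulting per-user feature vectors (together with \(D_{o}\) and \(\overline{C}\)) would be min-max scaled and passed to the Ward hierarchical clustering described in the next subsection, e.g. via scipy.cluster.hierarchy.linkage(features, method="ward").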
Table 2 Spatial features characterising individual's activity patterns. The node degree (a) is equivalent to the location visiting frequency The network features describe properties [26] which characterise how actively people explore new locations. Individual trajectories create a complex network, where a node represents one visited location. The clustering coefficient is a measure of the degree to which locations in such a complex network tend to cluster together. For each visited location, one important property is how frequently the users return to a visited location on average (z (−)) as not every location is evenly visited. The frequency of the most visited location reflects how centralised the complex network is (\(z_{m}\) (−)). Potential user groups: hierarchical clustering and down-sampling We apply hierarchical clustering to the six spatial features. The clustering procedure involves: (1) data standardisation, (2) distance calculation, (3) linkage establishment, and (4) splitting the linkage into clusters. For data standardisation, the min-max method is applied to each feature. For the distance calculation, the squared Euclidean distance is applied [46]. For cluster method of linkage establishment, Ward's method is used [47]. Sensible clustering is measured by the small sum of squares of deviations within the same cluster. By limiting the cluster distance larger than a certain threshold, the final clusters are formulated. The average silhouette width provides an evaluation of clustering validity [48]. As a result of hierarchical clustering, each user is categorised into a group with certain characteristics. To test the impact of geotweeting frequency on the group identity, random down-sampling is applied to raw individual trajectories of geotagged activities. The features are re-calculated based on the down-sampled trajectories. The same procedure of hierarchical clustering is applied to the updated feature sets to re-identify the behaviour group of individuals. In this section, we first briefly summarise the geotagged activity dataset regarding their spatial and temporal characteristics. Two clustering analyses are applied, resulting in four combinations of clusters that categorise users by two independent aspects of geotagged activities: geographical characteristics and network properties. We present the features of these four categories and visualise the typical network structures of the four categories. We further present the statistical characteristics of the users from each category. Based on these four groups of Twitter users, we present their trip distances and diffusion processes in space and time. Descriptive analysis of geotagged activity dataset and its comparison with travel survey Geotagged tweets of the Swedish users are collected without applying any geographical boundaries (Fig. 2(A)). A large proportion of geotagged locations are in Sweden (Fig. 2(B)). The ratio of distinct locations quantifies the variation level of geotagged locations for each user (Fig. 2(D)). The more geotagged locations that are outside the habitually visited locations, the larger the variation level. At the extreme, if the ratio is 1, the geotagged locations are purely random and we have no information on frequently visited locations such as workplace or home. The spread of the distribution, shown in Fig. 2(D), suggests that the proportions of distinct locations are evenly spread out between 0 and 1 among users. Characteristics of geotagged activity of Swedish users. 
(A) Geotagged activity Origin-Destination Matrix (ODM) on the map. Each point represents a region formulated by DBSCAN clustering with the distance threshold for merging as 1 km and the minimum number of location for a region is set as 1 [10, 49]. (B) Geotagged activity ODM on the map where the trip has both origin and destination located within Sweden. (C) A week-long geotagging activity pattern that captures the time-dependent characteristic of geotagged locations (average of all the users). The warmer the colour (e.g. red and orange), the higher number of geotagged locations. (D) The distribution of the ratio of distinct geotagged locations over total geotagged locations (individually calculated). (E) Daily distributions of visiting frequency of the top two most visited locations, weekday vs. weekend (adjusted by the overall distribution of geotagged tweeting frequency over seven days across a week) We assume that the first and the second most visited locations by users are either work or home. These two locations have distinct temporal distributions in a day. We apply a hierarchical clustering to the instances of users' daily time distribution of visiting frequency for these two locations. We find two significantly different patterns that fit work and home respectively (Fig. 2(E)). Individual geotagged activity is unevenly distributed in time (Fig. 2(C)). People's weekend activity is more dispersed and they spend less time at the two most visited locations; therefore, the frequencies of visits are lower compared to weekdays. Figure 3 shows how representative those Twitter users are regarding their estimated home locations compared with the Swedish travel survey and with the Census population. Not surprisingly, compared with the general population, the top Twitter users in Sweden seem to over-represent the residents in Stockholm county, while the rest of the top Twitter users seem to be distributed similarly to the population distribution (Fig. 3(A)). Compared with the travel survey (Fig. 3(B)), the top Twitter users are more concentrated in Stockholm and Malmö, the third biggest city but under-represent the residents in Västra Götaland county where the second biggest city Gothenburg is located. It is worth noting that the design of travel survey can over- or under- sample certain population segments depending on the expected response rate, usage patterns etc., in order to get representative samples. County-level geographical representativeness of estimated home locations from Twitter data: percentage value difference. (A) Twitter users vs. residents (Twitter minus Census population) [50]. (B) Twitter users vs. Swedish travel survey participants (Twitter minus survey) Behavioural categories There are four categories identified through two clustering analyses (see Fig. 4), one for geographical characteristics (namely global vs. local) and one for network properties (returner vs. explorer). Global returner. Geotagged locations are geographically remote and diverse. These individuals generate high proportions of international trips (the destination is outside of Sweden). They also exhibit a centralised network structure. We call this group of individuals global returner. Global explorer. Geotagged locations are geographically remote and diverse. Like global returners, these individuals frequently travel internationally; however, their visited locations are distributed in a more decentralised way, i.e., the locations are more evenly geotagged. 
We call this group of individuals global explorer. Local returner. Geotagged locations are not geographically remote or diverse. These individuals usually visit locations near and connected to a frequently visited centre. The more clustered sub-structures in their location network reflect their occasional explorations around a centralised location. We call this group of individuals local returner. Local explorer. Geotagged locations are not geographically remote or diverse. There are multiple locations that these individuals visited more frequently than the other locations, and those locations are relatively distant from each other, so the trip distances between them is large. Nevertheless, overall the visited locations are less centralised. Most users are in this category, which we call local explorer. Clustering results. (A) Clustering dendrogram. The clusters are formulated by splitting groups into clusters when the cluster distance is above five. (B) Kernel density estimation of cluster feature distributions of geographical characteristics and network properties. The height of the one-dimensional distribution is proportional to the fraction of the number of individuals in each cluster over the total individuals. The Silhouette width for both clustering results is above 0.5, indicating that the identified categories are sensible Figure 5 shows the network visualisation of four typical users' trajectories. To better illustrate the network structure, the location position is displayed optimally rather than according to its geographical position. The returners visit different places, centring on a large-degree node (frequently visited location). The chain structure of the explorers is characterised by the lack of a recognisable centre, implying a low returning rate. It is worth noting that the returners have more clustered sub-structures that correspond to daily mobility, i.e., people move near home locations for regular activities (e.g., commuting and shopping), and move around the locations of those regular activities [51]. Network visualisation of four representative individuals from each behavioural group. Each node represents one visited location. The diameter of the node is proportional to the node degree A statistical summary of the four categories is shown in Table 3. It shows an imbalanced distribution of Twitter users across four groups: most of them are local explorers (78.0%), followed by local returners (14.4%), while the rest are global explorers (7.3%) and global returners (0.3%). The ratio of domestic trips (dom), the returning rate of the most frequently visited location (R), and the geotweeting frequency (\(F_{g}\)) are different between categories (Kruskal–Wallis test, \(p < 0.001\)). The Mann–Whitney U test is applied to test the variable difference between each pair of categories. Regarding dom, a significant difference is found across all category pairs (\(p < 0.001\)). As for R and \(F_{g}\), there is no significant difference between global returner vs. local returner, and global explorer vs. local explorer. That finding indicates that a high returning rate and frequent geotweeting behaviour are associated with the centralised network structure of geotagged locations. The estimated home locations of different behaviour groups show an interesting spatial pattern (Fig. 6). Compared to overall Twitter users, local returners concentrate more in Stockholm and Malmö. Local explorers concentrate more in the middle of Sweden. 
Global returners only account for a small proportion of total users, and their geographic distribution is close to the overall studied Twitter users. Geographical distribution of estimated home locations of four behavioural groups (county level). The displayed percentage by county is the value of each group minus overall users' value. The warmer the county is shaded, the more its residents are represented in a certain group Table 3 Statistics of four behaviour groups. dom represents the percentage of trips where both the origin and destination are in Sweden (0), among the destination and the origin, there is one location outside Sweden (1), and both the origin and destination are outside of Sweden (2). R denotes the ratio of visiting frequency of the most frequently visited location over the total number of geotagged locations. \(F_{g}\) denotes the geotweeting frequency The impact of geotweeting frequency on group identification Table 3 shows a significant difference in geotweeting frequency between returners and explorers. It is possible that this difference affects the network properties of these users, and thus their group identity, i.e., if the returners' tweeting frequency is reduced to the same rate as explorers, there is a chance that they will be categorised as explorers without changing their actual travel behaviours. To test the above assumption, we randomly remove 50% of the geotagged tweets from the individuals' original trajectories. Then we calculate the features based on their new geotagged activity trajectories and apply hierarchical clustering to get the new behavioural group. The results are shown in Fig. 7(A). The down-sampling has changed the group identity of a small proportion (around 25%) of users (Fig. 7(A)). The most frequent group change is from local explorer, the largest identity group, to local returner, and from local explorer to global explorer. Figure 7(B) shows the distribution of geotweeting frequency across four behavioural groups. Returners have higher geotweeting frequency in general, however, the group changes are not related to their geotweeting frequency (Fig. 7(C)). Hence, the assumption that the distinct patterns of four user groups are solely due to their difference in geotweeting frequency does not hold. We conclude that the group identities of the users are robust regardless of the users' geotweeting frequency. Reclassification results. (A) Group changes by the original groups. GE = Global explorer, GR = Global returner, LE = Local explorer, LR = Local returner. (B) Geotweeting frequency of four behavioural groups. (C) Geotweeting frequency of the users who are re-identified as the original group vs. the ones who are not Collective behaviours: trip distance and diffusion in space In this section, we aggregate all trips within each user category to explore the collective behaviours of the four behavioural groups. The definition of a trip in the context of geotagged activity, as defined in Sects. 2.1.1 and 2.2, is different from the one in the Swedish travel survey (one-day diary). A trip in Twitter data is the connection between two consecutive geotagged tweets of the same user. It provides incomplete mobility information of individuals because of the spatiotemporal sparsity of tweets. Despite that, at an aggregate level and over large samples, studies generally find good agreements of trip distance comparing Twitter data vs. other sources of data including Call Detail Records (CDR) and censuses [22]. 
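The sketch below illustrates, under the same assumptions as the earlier feature sketch (and reusing its haversine_km helper), how such trips could be derived from one user's chronological geotagged tweets using the >10 m and >10 min conditions from the methods; the optional stricter distance cut shows how a survey-comparable subset could be selected. It is illustrative only, not the exact filtering code used in this study.

```python
# Turn one user's chronological geotagged tweets into consecutive-tweet "trips".
from datetime import timedelta

def extract_trips(points, times, min_km=0.01, min_wait=timedelta(minutes=10)):
    """Return (distance_km, waiting_time) pairs for consecutive geotagged tweets."""
    trips = []
    for k in range(1, len(points)):
        d = haversine_km(points[k - 1], points[k])   # helper from the feature sketch
        wait = times[k] - times[k - 1]
        if d > min_km and wait > min_wait:           # >10 m and >10 min by default
            trips.append((d, wait))
    return trips

# e.g. a subset roughly comparable with the one-day survey: distance >= 1 km,
# waiting time within 24 hours (the domestic/international split is omitted here).
# survey_like = [t for t in extract_trips(pts, ts, min_km=1.0)
#                if t[1] < timedelta(hours=24)]
```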
The minimum trip distance for the travel survey data is 1 km [42]. To be comparable with the survey, the Twitter data is reanalysed with a minimum trip distance also set to 1 km and a time frame of 24 hours, which excludes 24.8% of previously-analysed Twitter trips. Only 0.4% of trips in the Swedish travel survey are international, while geotagged tweets show 3.6% international trips on a comparable basis. The geotagged tweets approximate the 1-day travel survey data well for over 90% of the observed one-day domestic trip distances; however, the geotagged data have relatively more long-distance trips than the survey data (Fig. 8(A)). For international trips, despite the similarity between all users' distribution and the survey data, a large population variance exists between different user categories (Fig. 8(B)). The complementary cumulative distribution function (cCDF) of trip distance. (A) Domestic trips validated against the one-day travel survey. (B) International trips validated against the 60-day long-distance/international travel survey. The trip distance as a function of waiting time (the time interval between two consecutive geotagged tweets by the same individual) is shown in Fig. 9. The trip distance generally increases with the waiting time over a multiple-day period, at a decreasing rate, for up to 7 days (Fig. 9). The correlation between trip distance and waiting time suggests that the observed trip distance increases with waiting time. The diffusive nature of human mobility and the returning effect (e.g., return to home or return to work) create two distinct mechanisms that interact with each other: the diffusion effect causes the observed trip distance to increase with increasing waiting time, and the returning effect causes some of the distances to decrease to zero periodically, i.e., every 24 hours (Fig. 9). Trip distance vs. waiting time during 7 days. (A) All travellers. (B) Local travellers. (C) Global travellers. Waiting time is defined as the time interval between two consecutive geotagged tweets generated by the same Twitter user. Diffusion process The individual diffusion process is described by the time history of the radius of gyration \(r_{g}\). We first sort the distinct locations of each individual based on their visiting frequency. The \(r_{g}\) time history begins when the top location has been visited for the first time in one's trajectory of geotagged tweets, and it continues for 90 days thereafter. Previous studies have shown that \(r_{g}\) tends to stabilise within 2000 hours (around 3 months), e.g., [28]. The value of \(r_{g}\) is updated each time a geotagged tweet appears during the 90 days. Each time history is required to contain at least 10 instances of \(r_{g}\). We normalise the time sequences to the same data length (50 data points) by applying nearest-neighbour interpolation to sequences shorter than 50 and randomly down-sampling the sequences longer than 50. Hence, we get a normalised 90-day sequence of \(r_{g}\) for each user who satisfies the conditions above (2303 valid users in total) [17]. Figure 10(A) shows the \(r_{g}\) of the returners compared with the explorers and the time history from the random walk process. The global travellers have a larger mobility range than the local travellers throughout the 90 days. Their mobility range also increases continuously throughout the time period, whereas the returners' mobility range tends to saturate earlier.
If individual trajectories followed a random walk [52], then the radius of gyration would follow the solid black line \(r_{g}(t)\sim t^{1/2}\) in Fig. 10(A).
Diffusion process. (A) Time history of the radius of gyration within 90 days. The time history starts from the first observation of the most visited location; each data point indicates the mean value of \(r_{g}\) across the same group of users. (B) Visiting frequency by the ranking order of the most visited locations. The shaded range indicates the upper bound (75%) and lower bound (25%) of the cumulative frequency rate of visits
Figure 10(B) shows the cumulative distribution function of the visiting frequency rate vs. the most visited locations ordered by their visiting frequency. The cumulative frequency rate reflects the regularity of users' visiting behaviour. Returners' visits are more concentrated on fewer locations than the explorers' are. Not only does the cumulative frequency rate start higher and rise faster for returners than for explorers, but the cumulative frequency saturates around a mean value below 80% for returners compared to a mean value below 60% for explorers. The variations for explorers are higher as well (Fig. 10(B)).
This study presents a picture of the population heterogeneity of geotagged activity patterns through a novel combination of individual and aggregate perspectives in the analysis framework. In addition, we collect and apply a geotagged social media dataset spanning a long period and without any geographical boundaries.
Four distinct geotagged activity patterns: population heterogeneity and collective behaviours
In this study, we propose two essential dimensions of individual mobility: how far one travels and how actively one explores new locations. Based on the correspondingly constructed feature set, most users are identified as local explorers, followed by local returners, global explorers, and global returners. Local returners are characterised by relatively short-range trip distances compared to the global travellers. Returners' trajectories form complex networks that have a more concentrated structure than explorers' do. Daily mobility makes most people local travellers: they move between and around home, work, and locations of regular activities most of the time, with occasional long-distance travel or travel abroad. This explains why most Twitter users are categorised as either local explorers or local returners. These two dimensions have been explored separately in some previous studies. Using geotagged tweets, one study found two distinct types of Twitter users, with low randomness and high randomness, respectively [10]. In that study, randomness represents the visiting frequency distribution across distinct locations: the more the visiting frequency spreads, the higher the randomness. But that study did not capture the other dimension, how far one travels, which can also differ among sub-populations. Another study used a high-resolution dataset from a mobile navigation app, Sygic, in Australia, where two distinct groups of users were found: "travellers", who visit different areas with distinct, salient characteristics, and "locals", who cover shorter distances and revisit many of their locations [53]. But due to its high-dimensional indicators, that study did not expose the essential differences in human mobility, which makes the results less intuitive to interpret.
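To make the cumulative frequency rate behind Fig. 10(B), discussed above, concrete, a minimal sketch is given below; the visit counts are invented illustration values, not data from the study.

    # A minimal sketch of the cumulative visit-frequency curve by location rank.
    import numpy as np

    def cumulative_frequency_rate(visit_counts):
        """visit_counts: number of visits per distinct location for one user."""
        counts = np.sort(np.asarray(visit_counts, dtype=float))[::-1]  # rank locations by frequency
        return np.cumsum(counts) / counts.sum()

    # Example: a returner-like user vs. an explorer-like user (illustration values).
    returner = cumulative_frequency_rate([120, 40, 10, 5, 3, 2])
    explorer = cumulative_frequency_rate([20, 18, 15, 14, 12, 11, 10, 9, 8, 8])
    print(returner[:3], explorer[:3])  # the returner's curve starts higher and rises faster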
In our study, we capture the randomness by using the properties of the network structure to quantify how actively one explores new locations. The names of the four groups are inspired by previous studies suggesting a returner–explorer dichotomy in human mobility based on GPS logs and mobile phone data [29, 54]. Those studies showed two distinct network structures based on individual mobility trajectories: one user type recurrently travelled between many different locations (explorer) and the other had a smaller number of different locations (returner). The network structure was found to be invariant across the distances that one regularly covers (\(r_{g}\)). Building on those studies, we further created geographical characteristics to quantify "how far people travel", in an attempt to achieve a more complete description of human mobility patterns. The aforementioned studies also share a similar drawback: they apply data sources that only capture individuals' mobility within a country. This incomplete tracking fails to capture international trips and narrows their contributions to domestic mobility only. The present study has no restrictions from national or administrative spatial boundaries. We found that, even with high time sparsity, social media data can still capture differences in mobility patterns across sub-populations.
We illustrate the diffusion process of the four groups as one aspect of the collective human mobility patterns in Fig. 10. On the one hand, trip distance increases with waiting time yet decreases at each 24-hour cycle, indicating both the returning effect and the increased probability of exploring new locations (the diffusive nature of mobility). The correlation between trip distance and waiting time agrees with previous findings from mobile phone data [28] and Twitter data [10]. On the other hand, the time history of \(r_{g}\) highlights the differences between the four types of travellers, with the global explorers' mobility range increasing continuously whereas the returners' mobility range tends to saturate earlier. The mechanism of \(r_{g}\) stabilising (Fig. 10(A)) has also been described in another study [28]. Randomness (the degree of location predictability) plays an important role in the explorers' identity, as observed in Fig. 10(B). Explorers have a lesser tendency towards stabilisation in the cumulative frequency rate of visited locations than returners, which describes the essential difference between returners and explorers.
Implications of human mobility and population heterogeneity
Individual mobility is shaped by a person's capabilities, social network, and opportunities, i.e., an individual's ability to move, social needs and desires, and the availability of transportation resources such as infrastructure [4]. The understanding of population heterogeneity will benefit a broad range of disciplines, from travel behaviour modelling to the social sciences. For example, heterogeneity can be applied to generate more accurate agents in transport demand modelling [55]. The continuous tracking and long-term observation of individuals illustrated in this article can benefit disease prediction by providing a more dynamic and temporal perspective of how people diffuse in space [56]; adding population heterogeneity is also important for improving predictions and developing effective mitigation strategies [57, 58]. Putting individuals into different groups or places of residence according to their travel behaviour can also enable new research related to the adoption of new technology [59], etc.
The population heterogeneity identified in this study can be combined with sociodemographic information on individuals or groups, e.g., race and income level, in future studies to further understand factors such as the effect of neighbourhoods on the travel behaviours of individuals or groups [23], and the relationships between short-term mobility and long-term migration [60]. A study on location-based social networks shows that shared visited locations are informative in predicting the social connections between individuals [61]. The distinct behavioural groups identified in this study can provide additional insights that contribute to the inference of friendship, such as the relationship between people's mobility and their social network, where a large proportion of places visited are within a small distance of their nearest (geographical) social ties' locations [62]. The relationship between social ties and mobility can be further explored to form a more complete picture [4]. Questions such as whether individuals' social networks shape their mobility behaviour, or the other way around, can be further studied using the data we presented here. Would explorers have a different social network structure compared with returners? Does such a difference contribute to their distinct travel behaviours?
Representativeness of geotagged social media data as a proxy for human mobility
Compared to the one-day travel diary, the Twitter dataset in this study has strengths and weaknesses as a proxy for human mobility. Based on another ongoing study in which we compare geotagged tweets with different data sources, the main strengths of geotagged social media data are its long collection duration, large number of involved individuals, boundary-free spatial coverage, ease of access, low cost, and accurate location information. The main weaknesses are incomplete individual trajectories caused by high sparsity in the time dimension (plus behaviour bias), lack of socio-demographic information (plus population bias), and lack of trip information [24]. Despite the high time sparsity, one of the most appealing features of geotagged social media data is the capability of continuous and long-term tracking of individual mobility via geotagged activities. We demonstrate how that particular feature helps capture the heterogeneity of mobility. A one-day travel diary only captures trips generated within 24 hours for each survey participant, while the Twitter dataset covers on average 3.6 years for each participant. Although Twitter data are extremely sparse, the long-term and continuous tracking compensates for the time sparsity, allowing us to obtain a realistic picture of users' trips on an average day. Given the reported biases in geotagged social media data, the current study carefully conducts a descriptive analysis in comparison with the travel survey. The behaviour of using social media is complex and multidimensional. For example, more than 20 tweeting features have been used to characterise "how you tweet", including various time-related statistics [63]. If users constantly and regularly tweet during a certain daily time frame, or only from a few selected locations, then the locations we capture are skewed towards the locations that they tend to visit during that time frame. However, as seen in our study (Fig. 2(D)), it is not the case that people only geotweet from a few fixed locations. Despite peaks during lunch time and at night (Fig. 2(C)), geotagged tweets capture many routine activities (Fig. 2(D)), as seen from the temporal profiles of the first and second most visited locations, which share some similarities with the "ground truth" in the travel survey.
We explore population bias by comparing the geographical distribution of the Twitter users' estimated home locations in this study with those from the travel survey. It appears that the top Twitter users are over-represented by residents of big cities. This is consistent with the observation of a previous study [27]. Some of the disadvantages mentioned above can potentially be mitigated. Text mining could be applied to derive location information from the contents of the tweets. One study has proposed such an approach to infer the city-level location of tweets, partially mitigating the time sparsity of geotagged tweets [64]. Similarly, data fusion is promising for obtaining better application performance, e.g., activity prediction [65].
In this study, we develop a novel analysis framework that categorises individuals according to their geotagged activity patterns to reveal the population heterogeneity of mobility patterns. Based on the classification results, trip distance and the diffusion process in space are presented by distinct group. The major contributions of this study include:
(1) Datasets and analysis framework. This study involves two data sources: a household travel survey and geotagged social media data. The geotagged tweets dataset covers a long period (3.6 years on average) without any geographical boundaries. The descriptive analysis of geotagged tweets reveals behaviour and population differences between the two data sources. Our analysis framework provides a coherent picture of the geotagged activity patterns by combining the individual perspective with the aggregate perspective.
(2) Four distinct groups of users. We propose two essential aspects to quantify the population heterogeneity of human mobility: how actively one explores new locations and how far one travels. A set of features is defined to describe geotagged activity patterns from the perspectives of geographical characteristics and network properties. A hierarchical clustering analysis is applied (a minimal sketch of this step is given at the end of this section), and four types of Twitter users are identified: local explorers, local returners, global explorers, and global returners.
(3) Population heterogeneity. At the aggregate level, we present the diffusion process in space based on geotagged social media data. It shows good agreement with previous studies, while the key differences between user groups are quantified.
One limitation of the current study is the small number of individuals categorised as global returners. Therefore, their behaviour as captured in this study is less reliable than that of the other groups. However, for the sake of completeness, and reflecting the clustering analyses discussed in Sect. 3.2, we keep all the group identities to show how geotagged tweets reveal the heterogeneity of travelling behaviour. Future studies can increase the sample size of this subpopulation to explore the robustness of the results presented here. The other major limitation is that our conclusions from the geotagged activity patterns may not generalise to the overall population, due to the population and behaviour biases introduced by using geotagged tweets [24]. Given the known shortcomings, more systematic research efforts are required to identify and correct for these biases.
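The following is a minimal sketch of the feature-based hierarchical clustering step referred to in contribution (2). It is not the authors' pipeline: the features are random placeholders, and Ward's linkage is used here because Ward's method is among the clustering references cited in this article.

    # A minimal sketch: Ward hierarchical clustering of standardised user features,
    # cut into four groups. Feature values below are random placeholders.
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.stats import zscore

    def cluster_users(features: np.ndarray, n_groups: int = 4) -> np.ndarray:
        """features: one row per user (e.g. geographical and network-structure metrics)."""
        z = zscore(features, axis=0)              # standardise each feature
        tree = linkage(z, method="ward")          # agglomerative clustering, Ward's criterion
        return fcluster(tree, t=n_groups, criterion="maxclust")

    rng = np.random.default_rng(0)
    labels = cluster_users(rng.normal(size=(100, 5)))  # 100 users, 5 placeholder features
    print(np.bincount(labels)[1:])                     # number of users per group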
Our next step is to systematically compare multiple data sources, such as travel surveys, geotagged social media, call detail records, and GPS logs, to reach a deeper understanding of the strengths and weaknesses of each data source [24]. With such understanding, we can further develop mobility models informed by the revealed population heterogeneity, leveraging geotagged social media data to estimate more accurate and timely travel patterns and demand.
Abbreviations
ICT:
RQ:
Origin-Destination Matrix
GC: Geographical Cluster
NC: Network Cluster
CDR: Call Detail Records
cCDF: Complementary Cumulative Distribution Function
References
Noulas A, Scellato S, Lambiotte R, Pontil M, Mascolo C (2012) A tale of many cities: universal patterns in human urban mobility. PLoS ONE 7(5):37027. https://doi.org/10.1371/journal.pone.0037027 Treiber M, Kesting A (2013) Traffic flow dynamics. Traffic flow dynamics: data, models and simulation. Springer, Berlin. https://doi.org/10.1007/978-3-642-32460-4 Balcan D, Colizza V, Gonçalves B, Hu H, Ramasco JJ, Vespignani A (2009) Multiscale mobility networks and the spatial spreading of infectious diseases. Proc Natl Acad Sci 106(51):21484–21489. https://doi.org/10.1073/pnas.0906910106 Kaufmann V, Bergman M, Joye D (2004) Motility: mobility as capital. Int J Urban Regional 28-4:745–756. https://doi.org/10.1111/j.0309-1317.2004.00549.x Chen C, Ma J, Susilo Y, Liu Y, Wang M (2016) The promises of big data and small data for travel behavior (aka human mobility) analysis. Transp Res, Part C, Emerg Technol 68:285–299. https://doi.org/10.1016/j.trc.2016.04.005 Janzen M, Müller K, Axhausen KW (2017) Population synthesis for long-distance travel demand simulations using mobile phone data. In: 6th symposium of the European association for research in transportation (hEART 2017). Wang Z, He SY, Leung Y (2018) Applying mobile phone data to travel behaviour research: a literature review. Travel Behav Soc 11:141–155. https://doi.org/10.1016/j.tbs.2017.02.005 Zhang Z, He Q, Zhu S (2017) Potentials of using social media to infer the longitudinal travel behavior: a sequential model-based clustering method. Transp Res, Part C, Emerg Technol 85:396–414. https://doi.org/10.1016/j.trc.2017.10.005 Yue Y, Lan T, Yeh AGO, Li Q-Q (2014) Zooming into individuals to understand the collective: a review of trajectory-based travel behaviour studies. Travel Behav Soc 1(2):69–78. https://doi.org/10.1016/j.tbs.2013.12.002 Jurdak R, Zhao K, Liu J, AbouJaoude M, Cameron M, Newth D (2015) Understanding human mobility from Twitter. PLoS ONE 10(7):0131469. https://doi.org/10.1371/journal.pone.0131469 Hasan S, Ukkusuri SV (2014) Urban activity pattern classification using topic models from online geo-location data. Transp Res, Part C, Emerg Technol 44:363–381. https://doi.org/10.1016/j.trc.2014.04.003 Gao S, Yang JA, Yan B, Hu Y, Janowicz K, McKenzie G (2014) Detecting origin-destination mobility flows from geotagged tweets in greater Los Angeles area. In: Proceedings of the eighth international conference on geographic information science, pp 1–4 Hasan S, Schneider C, Ukkusuri S, González M (2013) Spatiotemporal patterns of urban human mobility. J Stat Phys 151(1–2):304–318. https://doi.org/10.1007/s10955-012-0645-0 Morstatter F, Pfeffer J, Liu H, Carley KM (2013) Is the sample good enough? Comparing data from Twitter's streaming api with Twitter's firehose. In: Seventh international AAAI conference on weblogs and social media.
https://www.aaai.org/ocs/index.php/ICWSM/ICWSM13/paper/view/6071/6379 Stolf Jeuken G (2017) Using big data for human mobility patterns—examining how Twitter data can be used in the study of human movement across space. Master's thesis. http://studentarbeten.chalmers.se/publication/250155-using-big-data-for-human-mobility-patterns-examining-how-twitter-data-can-be-used-in-the-study-of-hu Rashidi TH, Abbasi A, Maghrebi M, Hasan S, Waller TS (2017) Exploring the capacity of social media data for modelling travel behaviour: opportunities and challenges. Transp Res, Part C, Emerg Technol 75:197–211. https://doi.org/10.1016/j.trc.2016.12.008 Liao Y, Yeh S (2018) Predictability in human mobility based on geographical-boundary-free and long-time social media data. In: 2018 21st international conference on intelligent transportation systems (ITSC). IEEE Press, New York, pp 2068–2073. https://doi.org/10.1109/ITSC.2018.8569770 Malik MM, Lamba H, Nakos C, Pfeffer J (2015) Population bias in geotagged tweets. In: Ninth international AAAI conference on web and social media, pp 18–27. https://www.aaai.org/ocs/index.php/ICWSM/ICWSM15/paper/viewPaper/10662 Ruths D, Pfeffer J (2014) Social media for large studies of behavior. Science 346(6213):1063–1064. https://doi.org/10.1126/science.346.6213.1063 Tasse D, Liu Z, Sciuto A, Hong JI (2017) State of the geotags: motivations and recent changes. In: Eleventh international AAAI conference on weblogs and social media, pp 250–259. https://www.aaai.org/ocs/index.php/ICWSM/ICWSM17/paper/viewPaper/15588 Wesolowski A, Eagle N, Noor AM, Snow RW, Buckee CO (2013) The impact of biases in mobile phone ownership on estimates of human mobility. J R Soc Interface 10(81):20120986. https://doi.org/10.1098/rsif.2007.1218 Lenormand M, Picornell M, Cantú-Ros OG, Tugores A, Louail T, Herranz R, Barthelemy M, Frias-Martinez E, Ramasco JJ (2014) Cross-checking different sources of mobility information. PLoS ONE 9(8):105184. https://doi.org/10.1371/journal.pone.0105184 Wang Q, Phillips NE, Small ML, Sampson RJ (2018) Urban mobility and neighborhood isolation in America's 50 largest cities. Proc Natl Acad Sci 115(30):7735–7740. https://doi.org/10.1073/pnas.1802537115 Liao Y, Yeh S (2020) Using geotagged tweets to assess human mobility: a comparison with travel survey and GPS log data (under review). Transp Res, Part C, Emerg Technol Hasnat MM, Hasan S (2018) Identifying tourists and analyzing spatial patterns of their destinations from location-based social media data. Transp Res, Part C, Emerg Technol 96:38–54. https://doi.org/10.1016/j.trc.2018.09.006 Lenormand M, Gonçalves B, Tugores A, Ramasco JJ (2015) Human diffusion and city influence. J R Soc Interface 12(109):20150473. https://doi.org/10.1098/rsif.2015.0473 Mislove A, Lehmann S, Ahn Y-Y, Onnela J-P, Rosenquist JN (2011) Understanding the demographics of Twitter users. In: Fifth international AAAI conference on weblogs and social media, pp 554–557. https://www.aaai.org/ocs/index.php/ICWSM/ICWSM11/paper/view/2816/3234 Gonzalez MC, Hidalgo CA, Barabasi A-L (2008) Understanding individual human mobility patterns. Nature 453(7196):779–782. https://doi.org/10.1038/nature07850 Song C, Koren T, Wang P, Barabási A-L (2010) Modelling the scaling properties of human mobility. Nat Phys 6(10):818–823. https://doi.org/10.1038/nphys1760 Coffey C, Pozdnoukhov A (2013) Temporal decomposition and semantic enrichment of mobility flows. In: Proceedings of the 6th ACM SIGSPATIAL international workshop on location-based social networks. LBSN'13. 
ACM, New York, pp 34–43. https://doi.org/10.1145/2536689.2536806 Chang J, Sun E (2011) Location3: how users share and respond to location-based data on social networking sites. In: Proceedings of the fifth international AAAI conference on weblogs and social media, pp 74–80 Pianese F, An X, Kawsar F, Ishizuka H (2013) Discovering and predicting user routines by differential analysis of social network traces. In: 2013 IEEE 14th international symposium and workshops on a World of wireless, mobile and multimedia networks (WoWMoM). IEEE Press, New York, pp 1–9. https://doi.org/10.1109/WoWMoM.2013.6583383 Hasan S, Ukkusuri SV (2015) Location contexts of user check-ins to model urban geo life-style patterns. PLoS ONE 10(5):0124819. https://doi.org/10.1371/journal.pone.0124819 Yang D, Zhang D, Zheng VW, Yu Z (2015) Modeling user activity preference by leveraging user spatial temporal characteristics in lbsns. IEEE Trans Syst Man Cybern Syst 45(1):129–142. https://doi.org/10.1109/TSMC.2014.2327053 Jin P, Cebelak M, Yang F, Zhang J, Walton C, Ran B (2014) Location-based social networking data: exploration into use of doubly constrained gravity model for origin-destination estimation. Transp Res Rec 2430:72–82. https://doi.org/10.3141/2430-08 Lee JH, Gao S, Goulias KG (2015) Can Twitter data be used to validate travel demand models. In: 14th international conference on travel behaviour research. Lee JH, Davis AW, Yoon SY, Goulias KG (2016) Activity space estimation with longitudinal observations of social media data. Transportation 43(6):955–977. https://doi.org/10.1007/s11116-016-9719-1 Keuschnigg M, Mutgan S, Hedström P (2019) Urban scaling and the regional divide. Sci Adv 5(1):0042. https://doi.org/10.1126/sciadv.aav0042 Kantardzic M (2011) Data mining: concepts, models, methods, and algorithms. Wiley, Hoboken. https://doi.org/10.1109/9780470544341 The Tweepy project developers: Tweepy: v3.5.0 (2017). http://tweepy.readthedocs.io/en/v3.5.0/ Barabási A-L (2005) The origin of bursts and heavy tails in human dynamics. Nature 435(7039):207–211. https://doi.org/10.1038/nature03459 Official Statistics of Sweden: Swedish National Travel Survey (RVU Sweden) 2011–2016. (2016). https://www.trafa.se/en/travel-survey/travel-survey/ Markovich N (2008) Nonparametric analysis of univariate heavy-tailed data: research and practice, vol 753. Wiley, Chichester Barabási A-L et al. (2016) Network science. Cambridge University Press, Cambridge Song C, Qu Z, Blumm N, Barabási A-L (2010) Limits of predictability in human mobility. Science 327(5968):1018–1021. https://doi.org/10.1126/science.1177170 Deza MM, Deza E (2009) Encyclopedia of distances. Springer, Berlin. https://doi.org/10.1007/978-3-642-00234-2 Ward JH Jr (1963) Hierarchical grouping to optimize an objective function. J Am Stat Assoc 58(301):236–244. https://doi.org/10.1080/01621459.1963.10500845 Rousseeuw PJ (1987) Silhouettes: a graphical aid to the interpretation and validation of cluster analysis. J Comput Appl Math 20:53–65. https://doi.org/10.1016/0377-0427(87)90125-7 Ester M, Kriegel H-P, Sander J, Xu X et al. (1996) A density-based algorithm for discovering clusters in large spatial databases with noise. In: Kdd, vol 96. AAAI Press, Palo Alto, pp 226–231. Statistics Sweden: Population of Sweden in 2016, by county (2016). https://www.statista.com/statistics/526617/sweden-population-density-by-county/ Golledge RG, Stimson RJ (1997) Spatial behavior: a geographic perspective. 
Guilford Press, New York Brockmann D, Hufnagel L, Geisel T (2006) The scaling laws of human travel. Nature 439(7075):462. https://doi.org/10.1038/nature04292 Scherrer L, Tomko M, Ranacher P, Weibel R (2018) Travelers or locals? Identifying meaningful sub-populations from human movement data in the absence of ground truth. EPJ Data Sci 7(1):19. https://doi.org/10.1140/epjds/s13688-018-0147-7 Pappalardo L, Simini F, Rinzivillo S, Pedreschi D, Giannotti F, Barabási A-L (2015) Returners and explorers dichotomy in human mobility. Nat Commun 6. https://doi.org/10.1038/ncomms9166 Anda C (2018) A time-space model of disaggregated urban mobility from aggregated mobile phone data. In: 15th international conference on travel behavior research (IATBR 2018). Future Cities Laboratory (FCL), Zurich. https://doi.org/10.3929/ethz-b-000300714 Xu Z, Glass K, Lau CL, Geard N, Graves P, Clements A (2017) A synthetic population for modelling the dynamics of infectious disease transmission in American Samoa. Sci Rep 7(1):16725. https://doi.org/10.1038/s41598-017-17093-8 Merler S, Ajelli M (2010) Human mobility and population heterogeneity in the spread of an epidemic. Proc Comput Sci 1(1):2237–2244 Dobra A, Bärnighausen T, Vandormael A, Tanser F (2019) A method for statistical analysis of repeated residential movements to link human mobility and hiv acquisition. PLoS ONE 14(6):0217284 Vannoy SA, Palvia P (2010) The social influence model of technology adoption. Commun ACM 53(6):149–153 Fiorio L, Abel G, Cai J, Zagheni E, Weber I, Vinué G (2017) Using Twitter data to estimate the relationship between short-term mobility and long-term migration. In: Proceedings of the 2017 ACM on web science conference. ACM, New York, pp 103–110 Pelechrinis K, Krishnamurthy P (2016) Socio-spatial affiliation networks. Comput Commun 73:251–262 Phithakkitnukoon S, Smoreda Z, Olivier P (2012) Socio-geography of human mobility: a study using longitudinal mobile phone data. PLoS ONE 7(6):39253 Pennacchiotti M, Popescu A-M (2011) A machine learning approach to Twitter user classification. In: Fifth international AAAI conference on weblogs and social media. https://www.aaai.org/ocs/index.php/ICWSM/ICWSM11/paper/viewPaper/2886 Cheng Z, Caverlee J, Lee K (2010) You are where you tweet: a content-based approach to geo-locating Twitter users. In: Proceedings of the 19th ACM international conference on information and knowledge management, CIKM'10. ACM, New York, pp 759–768. https://doi.org/10.1145/1871437.1871535 Zhu Z, Blanke U, Tröster G (2014) Inferring travel purpose from crowd-augmented human mobility data. In: Proceedings of the first international conference on IoT in urban space. URB-IOT '14. ICST (Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering), ICST, Brussels, pp 44–49. https://doi.org/10.4108/icst.urb-iot.2014.257173 The authors would like to express their sincere gratitude to the two anonymous reviewers whose comments have greatly improved this manuscript. The data that support the findings of this study are available from Twitter (https://twitter.com). The dataset generated and analysed for this study is not publicly available due to privacy protection of the subjects in our dataset, which can be potentially used to identify individuals. This research is funded by the Swedish Research Council Formas (Project Number 2016-1326). 
Department of Space, Earth and Environment, Division of Physical Resource Theory, Chalmers University of Technology, Gothenburg, Sweden Yuan Liao & Sonia Yeh School of Engineering Sciences in Chemistry, Biotechnology and Health, KTH Royal Institute of Technology, Stockholm, Sweden Gustavo S. Jeuken Yuan Liao Sonia Yeh YL, SY and GJ designed the study. YL analyzed the data. YL and SY wrote the paper. All authors edited and approved the final version of this manuscript. Correspondence to Yuan Liao. Liao, Y., Yeh, S. & Jeuken, G.S. From individual to collective behaviours: exploring population heterogeneity of human mobility based on social media data. EPJ Data Sci. 8, 34 (2019). https://doi.org/10.1140/epjds/s13688-019-0212-x DOI: https://doi.org/10.1140/epjds/s13688-019-0212-x Geotagged activity patterns Individual mobility
CommonCrawl
Surface area of a cylinder formula
A cylinder is a three-dimensional geometric figure that has two circular bases that are parallel to each other; a right circular cylinder is a solid formed by two parallel bases and a lateral surface. This shape is similar to a can, and a cylindrical shipping container is shaped just like an ordinary cylinder. The Cylinder and Prism are the same except that the base is a circle instead of a polygon. The mathematical definition of surface area in the presence of curved surfaces is considerably more involved than the definition of arc length of one-dimensional curves, or of the surface area for polyhedra (i.e., objects with flat polygonal faces), for which the surface area is the sum of the areas of the faces.
The surface area of a cylinder is the area of the top and bottom circles (which are the same) plus the area of the rectangle (the label that wraps around the can). A cylinder has two congruent bases, which makes it easy to calculate its surface area: you simply find the area of one base and double that value; then you add the cylinder's lateral area (or lateral "rectangle"). The side of the cylinder, when "unrolled", is a rectangle: slice this piece of the can from bottom to top and roll it out flat. It forms a rectangle 2πr units long and h units high; the area of a rectangle is the length times the width, where 2πr is the circumference of the circle and h is the height. This formula comes from the equation A = A(lateral) + 2 × A(base). For a closed right circular cylinder of base radius r and height h:
base surface area (top and bottom circles): T = B = πr², so both bases together give 2πr²;
lateral (curved) surface area: CSA = 2πrh square units;
total surface area: A = L + T + B = 2πrh + 2πr² = 2πr(r + h) square units;
volume: V = πr²h cubic units.
Here r is the radius of the circular base, h is the height of the cylinder, and π is taken as 22/7 or 3.14. In a spreadsheet you can use the standard geometric formula based on the PI function together with the exponent operator (^). Area is measured in square units such as square centimetres, square inches or square feet; a square centimetre (cm²) is a square whose sides all measure 1 centimetre. Answers for volume problems should always be in cubic units, because area is two dimensional and volume is three dimensional. These equations will give you correct answers if you keep the units straight: for instance, if the dimensions are given in m, then the surface area will be in m², which is the standard unit of surface area in the International System of Units (SI). Round π to 3.14 where needed and round answers as instructed (to the nearest tenth or whole number).
Part 1 of 3: calculating the surface area of the circles (2 × (π × r²)). Visualize the top and bottom of a cylinder: a can of soup is the shape of a cylinder. Find the radius of your cylinder; the radius is the distance from the center of a circle to the outer edge of the circle. Calculate the surface area of the top circle, then do it again for the circle on the other side. Add those two circles to the lateral rectangle and you have the surface area of the cylinder. An online calculator example reports, for one set of inputs, Total = 138.23 and Lateral = 113.1; the same calculator reports, for a truncated-cone example, A base = 78.54, A top = 28.27, A lateral = 280.99, A total = 387.81.
Worked examples
Example (r = 14, h = 15, π = 22/7): total surface area = 2πr(h + r) = 2 × (22/7) × 14 × (15 + 14) = 2 × 22 × 2 × 29 = 2552 square units.
Example (r = 10 cm, h = 35 cm, π ≈ 3.14): surface area = (2π × 10 × 35) + 2 × (π × 10²) ≈ 2827 cm², which can also be written as 0.2827 m² (2827 multiplied by 10⁻⁴ to convert from cm² to m²).
Example (r = 3, h = 6): for the surface area you would calculate 2π(3)(6) + 2π(3)² = 54π; for the volume, π(3)²(6) = 54π. To find the area of one base, just plug the radius, 3 centimetres (1.2 in), into the equation for the area of a circle: A = πr² = π × 3² = π × 9 ≈ 28.26 cm² (with π ≈ 3.14).
Example (r = 5 cm, h = 7 cm): surface area = 2π(5)(7) + 2π(5²) = 120π ≈ 376.8 cm².
Example (r = 5, h = 10): find the volume, curved surface area and total surface area. Volume = πr²h = 250π ≈ 785.4; curved surface area = 2πrh = 100π ≈ 314.2; total surface area = 2πr(r + h) = 150π ≈ 471.2.
Related quantities
The volume of a cylinder signifies the amount of material it can carry, or how much of any material can be immersed in it. We can also find the diameter of a cylinder using the volume: since V = πr²h, or in terms of the diameter V = (πd²/4)h, we can solve for the diameter if we have the values for the volume and the height.
The surface-area-to-volume ratio gives the proportion of surface area per unit volume of an object (e.g., sphere, cylinder): SA/VOL = surface area (x²) / volume (x³) = x⁻¹, where x is the unit of measurement.
A sphere with radius r has a volume of (4/3)πr³ and a surface area of 4πr². The surface area of a capsule can be determined by combining the surface area equation for a sphere with the lateral surface area of a cylinder; note that the areas of the cylinder's bases are not included, since they do not form part of the surface of a capsule.
A cube has 6 identical square faces and hence is also called a hexahedron; each face of a cube has 4 edges, and in total there are 12 edges. The surface area of a cube is the total area of its outside surfaces and is given by A = 6a², where a is the edge.
The formula for the weight of a cylinder is Wt = [πr²h] × mD, where Wt = weight of the cylinder, r = radius of the cylinder, h = height of the cylinder, and mD = mean density of the material in the cylinder.
A note on normals, from a Q&A exchange: in cylindrical coordinates, with basis vectors 1_r, 1_φ, 1_z (so that x = r cos φ, y = r sin φ, z = z), the normal to the cylinder is simply 1_r; unless you want to scale the normal with the radius of the cylinder (and why would you want that?), that is the normal. Your expression already is in Cartesian coordinates: you give an x component, a y component, and a z component.
A Cylinder Head Temperature gauge (CHT) measures the cylinder head temperature of an engine. Commonly used on air-cooled engines, the head temperature gauge displays the work that the engine is performing more quickly than an oil or water temperature gauge; the meter can be digital. As the engine works at high speed or uphill, head temperature will increase quickly.
Cylinder practice problems
In this unit, we will learn to find the surface area and volume of the following three-dimensional solids: 1. Prisms 2. Pyramids 3. Cylinders. Solve problems abstractly and in context relating the surface area to the volume of cylindrical shapes. Describe, compare, and contrast the words lateral area, base area, and surface area.
1. A right circular cylinder has surface area 200 cm². To find the radius r of a cylinder from its surface area A, you must also know the cylinder's height h: substitute the height h into the surface area equation A = 2πr² + 2πrh and solve for r; in other words, evaluate r(200).
2. The curved surface area of a right circular cylinder of base radius 7 cm is 110 cm². Find its height. Solution: given the base radius r = 7 cm, 2πrh = 110, so h = 110 / (2 × (22/7) × 7) = 110/44 = 2.5 cm.
3. The sum of the height and the radius of a solid cylinder is 35 cm and its total surface area is 3080 cm²; find the volume of the cylinder.
4. A cylinder has a height of 28 cm and the volume of material in it is 704 cm³; find its external curved surface area.
5. Find the volume of a cylinder if the length of its height is 6 ft and the diameter of the base is 6 ft (answer choices given in the source: 678.24 ft³, 216 ft³, 113.04 ft³, 54 ft³).
6. A silo has a cylindrical shape; find the lateral area of a silo 20 meters tall. Find the surface area of a cylindrical container, to the nearest tenth of a square foot, assuming it has a closed top and bottom.
7. Find the surface area of the cylinder to the nearest whole number: the cylinder has a height of 18 and a diameter of 17 (the figure is not drawn to scale).
8. A multiple-choice item with dimensions 12 in and 17 in offered ~905 in², ~1,282 in², ~2,187 in² and ~7,691 in² as choices (the answer given in the source was C). Another practice test (Chapter 9, surface area, multiple choice: identify the choice that best completes the statement or answers the question) lists answer sets "A) 114 cm B) 134 cm C) 586 cm D) 94 cm" and "A) 10 cm B) 31.4 cm C) 125.6 cm D) 62.8 cm"; the corresponding figures are not reproduced here.
9. Use formulas to find the lateral area and surface area of the given prism (the bases are right triangles), and find the surface area of the regular pyramid shown to the nearest whole number.
10. Find the area of the surface G cut from the hemisphere x² + y² + z² = 4, z ≥ 0, by the planes z = 2 and z = 6; also find the surface area on the right circular cylinder x² + y² = 4 (a calculus-level exercise, stated here as given).
Now calculate the surface area and volume of the cylinder with the help of the above formulas:
    s_area = 2 * math.pi * r * (r + h)   # total surface area of the cylinder, 2*pi*r*(r + h)
    volume = math.pi * r ** 2 * h        # volume of the cylinder, pi*r^2*h
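The original page breaks off before combining the whole program; a complete, minimal version might look like the following. This is a sketch, not code from the source, and the example radius and height are arbitrary illustration values.

    # A minimal, self-contained cylinder calculator based on the formulas above.
    import math

    def cylinder_metrics(r: float, h: float) -> dict:
        base_area = math.pi * r ** 2               # area of one circular base
        lateral_area = 2 * math.pi * r * h         # curved (lateral) surface area
        total_area = 2 * base_area + lateral_area  # equals 2*pi*r*(r + h)
        volume = base_area * h                     # pi * r^2 * h
        return {
            "base_area": base_area,
            "lateral_area": lateral_area,
            "total_surface_area": total_area,
            "volume": volume,
            "surface_to_volume": total_area / volume,
            "diameter_from_volume": 2 * math.sqrt(volume / (math.pi * h)),  # recovers 2r
        }

    if __name__ == "__main__":
        r, h = 5.0, 10.0  # e.g. the r = 5, h = 10 worked example above
        for name, value in cylinder_metrics(r, h).items():
            print(f"{name}: {value:.2f}")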
CommonCrawl
A reverse Hardy–Hilbert-type integral inequality involving one derivative function
Qian Chen1 & Bicheng Yang2
In this article, by using weight functions, the idea of introducing parameters, the reverse extended Hardy–Hilbert integral inequality and the techniques of real analysis, a reverse Hardy–Hilbert-type integral inequality involving one derivative function and the beta function is obtained. The equivalent statements of the best possible constant factor related to several parameters are considered. The equivalent form, the cases of non-homogeneous kernel and some particular inequalities are also presented.
Assuming that \(0 < \sum_{m = 1}^{\infty } a_{m}^{2} < \infty \) and \(0 < \sum_{n = 1}^{\infty } b_{n}^{2} < \infty \), we have the following Hilbert inequality with the best possible constant factor π (cf. [1], Theorem 315): $$ \sum_{m = 1}^{\infty } \sum _{n = 1}^{\infty } \frac{a_{m}b_{n}}{m + n} < \pi \Biggl(\sum _{m = 1}^{\infty } a_{m}^{2} \sum_{n = 1}^{\infty } b_{n}^{2} \Biggr)^{1/2}. $$ If \(0 < \int _{0}^{\infty } f^{2}(x)\,dx < \infty \) and \(0 < \int _{0}^{\infty } g^{2}(y) \,dy < \infty \), then we still have the integral analogue of (1) as follows (cf. [1], Theorem 316): $$ \int _{0}^{\infty } \int _{0}^{\infty } \frac{f(x)g(y)}{x + y}\,dx\,dy < \pi \biggl( \int _{0}^{\infty } f^{2}(x)\,dx \int _{0}^{\infty } g^{2}(y) \,dy \biggr)^{1/2}, $$ where the constant factor π is the best possible. Inequalities (1) and (2) with their extensions play an important role in analysis and its applications (cf. [2–13]).
The following half-discrete Hilbert-type inequality was presented in 1934 (cf. [1], Theorem 351): If \(K(x)\) (\(x > 0\)) is a non-negative decreasing function, \(p > 1,\frac{1}{p} + \frac{1}{q} = 1,0 < \phi (s) = \int _{0}^{\infty } K(x)x^{s - 1} \,dx < \infty \), \(f(x) \ge 0, 0 < \int _{0}^{\infty } f^{p} (x)\,dx < \infty \), then $$ \sum_{n = 1}^{\infty } n^{p - 2}\biggl( \int _{0}^{\infty } K (nx)f(x)\,dx\biggr)^{p} < \phi ^{p}\biggl(\frac{1}{q}\biggr) \int _{0}^{\infty } f^{p} (x)\,dx. $$ In recent years, some new extensions and reverses of (3) were presented by [14–19]. In 2006, by using the Euler–Maclaurin summation formula, Krnic et al. [20] gave an extension of (1) with the kernel \(\frac{1}{(m + n)^{\lambda }}\ (0 < \lambda \le 14)\). In 2019–2020, using the results of [20], Adiyasuren et al. [21] considered an extension of (1) involving partial sums, and Mo et al. [22] gave an extension of (2) involving upper limit functions. In 2016–2017, by applying weight functions, Hong et al. [23, 24] considered some equivalent statements of the extensions of (1) and (2) with several parameters. For some similar work, see [25–28].
In this paper, following [21, 23], by the use of weight functions, the idea of introducing parameters, the reverse extension of (1) and the technique of real analysis, a reverse Hardy–Hilbert-type integral inequality with the kernel \(\frac{1}{(x + y)^{\lambda + 1}}\ (\lambda > 0)\) involving one derivative function and the beta function is given. The equivalent statements of the best possible constant factor related to several parameters are considered. The equivalent form, the cases of non-homogeneous kernel and a few particular inequalities are obtained.
Some lemmas In what follows, we assume that \(0 < p < 1,\frac{1}{p} + \frac{1}{q} = 1,\lambda > 0,\lambda _{i} \in (0,\lambda )\ (i = 1,2),a: = \lambda - \lambda {}_{1} - \lambda _{2}\), \(f(x)\) is a non-negative measurable function in \(R_{ +} = (0,\infty )\), and \(g(y)\) is a non-negative increasing differentiable function unless at finite points in \(R_{ +} \), with \(g(y) = o(1)\ (y \to 0^{ +} )\), \(g(y) = o(e^{ty})\ (t > 0;y \to \infty )\) satisfying $$ 0 < \int _{0}^{\infty } x^{p(1 - \lambda _{1}) - a - 1} f^{p}(x) \,dx < \infty \quad \text{and}\quad 0 < \int _{0}^{\infty } y^{q(1 - \lambda _{2}) - a - 1} g^{\prime \,q}(y) \,dy < \infty. $$ By the definition of the gamma function, for \(\lambda ,x,y > 0\), the following expression holds (cf. [29]): $$ \frac{1}{(x + y)^{\lambda }} = \frac{1}{\Gamma (\lambda )} \int _{0}^{\infty } t^{\lambda - 1} e^{ - (x + y)t} \,dt, $$ where the gamma function is defined by $$ \Gamma (\alpha ): = \int _{0}^{\infty } e^{ - t} t^{\alpha - 1}\,dt\quad ( \alpha > 0), $$ satisfying $$ \Gamma (\alpha + 1) = \alpha \Gamma (\alpha ) \quad(\alpha > 0). $$ Lemma 1 For \(t > 0\), we have the following expression: $$ \int _{0}^{\infty } e^{ - ty} g(y)\,dy = \frac{1}{t} \int _{0}^{\infty } e^{ - ty} g'(y) \,dy. $$ Since \(g(y) = o(1)\ (y \to 0^{ +} )\), we find $$\begin{aligned} \int _{0}^{\infty } e^{ - ty} g'(y) \,dy &= \int _{0}^{\infty } e^{ - ty} \,dg(y) \\ &= e^{ - ty}g(y)|_{0}^{\infty } - \int _{0}^{\infty } g (y)\,de^{ - ty} = \lim _{y \to \infty } \frac{g(y)}{e^{ty}} + t \int _{0}^{\infty } e^{ - ty} g(y)\,dy. \end{aligned}$$ In view of \(g(y) = o(e^{ty})\ (t > 0;y \to \infty )\), we have \(\lim_{y \to \infty } \frac{g(y)}{e^{ty}} = 0\), and then $$ t \int _{0}^{\infty } e^{ - ty} g(y)\,dy = \int _{0}^{\infty } e^{ - ty} g'(y) \,dy, $$ namely, Eq. (5) follows. The lemma is proved. □ Define the following weight functions: $$\begin{aligned} &\varpi (\lambda _{2},x): = x^{\lambda - \lambda _{2}} \int _{0}^{\infty } \frac{t^{\lambda _{2} - 1}}{(x + t)^{\lambda }} \,dt\quad (x \in \mathrm{R}_{ +} ), \end{aligned}$$ $$\begin{aligned} &\omega (\lambda _{1},y): = y^{\lambda - \lambda _{1}} \int _{0}^{\infty } \frac{t^{\lambda _{1} - 1}}{(t + y)^{\lambda }} \,dt\quad (y \in \mathrm{R}_{ +} ). \end{aligned}$$ We have the following expressions: $$\begin{aligned} &\varpi (\lambda _{2},x) = B(\lambda _{2},\lambda - \lambda _{2}) \quad(x \in \mathrm{R}_{ +} ), \end{aligned}$$ $$\begin{aligned} &\omega (\lambda _{1},y) = B(\lambda _{1},\lambda - \lambda _{1})\quad (y \in \mathrm{R}_{ +} ), \end{aligned}$$ where \(B(u,v): = \int _{0}^{\infty } \frac{t^{u - 1}}{(1 + t)^{u + v}} \,dt(u,v > 0)\) is the beta function, such that $$ B(u,v) = \frac{1}{\Gamma (u + v)}\Gamma (u)\Gamma (v). $$ Setting \(u = \frac{t}{x}\), we find $$ \varpi (\lambda _{2},x) = x^{\lambda - \lambda _{2}} \int _{0}^{\infty } \frac{(ux)^{\lambda _{2} - 1}}{(x + ux)^{\lambda }} x\,du = \int _{0}^{\infty } \frac{u^{\lambda _{2} - 1}}{(1 + u)^{\lambda }} \,du = B(\lambda _{2},\lambda - \lambda _{2}), $$ namely, (8) follows. In the same way, we have (9). 
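Expressions (8) and (9) lend themselves to a quick numerical sanity check; the following sketch (not part of the paper) evaluates the integral in (6) by quadrature and compares it with the beta function, for arbitrarily chosen admissible parameters \(0 < \lambda_{2} < \lambda\) and several values of \(x\).

    # Numerical check (not part of the paper) of expression (8):
    # x^(lam - lam2) * int_0^inf t^(lam2 - 1) / (x + t)^lam dt = B(lam2, lam - lam2).
    import numpy as np
    from scipy.integrate import quad
    from scipy.special import beta

    lam, lam2 = 3.0, 1.2  # arbitrary test values with 0 < lam2 < lam
    for x in (0.5, 1.0, 7.0):
        integral, _ = quad(lambda t: t**(lam2 - 1) / (x + t)**lam, 0, np.inf)
        lhs = x**(lam - lam2) * integral
        print(x, lhs, beta(lam2, lam - lam2))  # lhs should match B(lam2, lam - lam2) for every x

The fact that the result is independent of x is exactly the content of (8); the same check applies to (9) by symmetry.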
We have the following reverse Hardy–Hilbert integral inequality involving one derivative function: $$\begin{aligned} &\int _{0}^{\infty } \int _{0}^{\infty } \frac{f(x)g'(y)}{(x + y)^{\lambda }} \,dx \\ &\quad > B^{\frac{1}{p}}(\lambda _{2},\lambda - \lambda {}_{2})B^{\frac{1}{q}}( \lambda _{1},\lambda - \lambda {}_{1}) \\ &\qquad{}\times \biggl[ \int _{0}^{\infty } x^{p(1 - \lambda _{1}) - a - 1} f^{p}(x) \,dx\biggr]^{\frac{1}{p}}\biggl[ \int _{0}^{\infty } y^{q(1 - \lambda _{2}) - a - 1} g^{\prime \,q}(y) \,dy\biggr]^{\frac{1}{q}}. \end{aligned}$$ By the reverse Hölder inequality (cf. [30]), we obtain $$\begin{aligned} &\int _{0}^{\infty } \int _{0}^{\infty } \frac{f(x)g'(y)}{(x + y)^{\lambda }} \,dx\,dy \\ &\quad = \int _{0}^{\infty } \int _{0}^{\infty } \frac{1}{(x + y)^{\lambda }} \biggl[ \frac{y^{(\lambda _{2} - 1)/p}}{x^{(\lambda _{1} - 1)/q}}f(x)\biggr] \biggl[\frac{x^{(\lambda _{1} - 1)/q}}{y^{(\lambda _{2} - 1)/p}}g'(y)\biggr] \,dx\,dy \\ &\quad \ge \biggl\{ \int _{0}^{\infty } \biggl[ \int _{0}^{\infty } \frac{1}{(x + y)^{\lambda }} \frac{y^{\lambda _{2} - 1}\,dy}{x^{(\lambda _{1} - 1)(p - 1)}} \biggr]f^{p}(x)\,dx\biggr\} ^{\frac{1}{p}} \\ &\qquad{}\times \biggl\{ \int _{0}^{\infty } \biggl[ \int _{0}^{\infty } \frac{1}{(x + y)^{\lambda }} \frac{x^{\lambda _{1} - 1}\,dx}{y^{(\lambda _{2} - 1)(q - 1)}} \biggr]g^{\prime \,q}(y)\,dy\biggr\} ^{\frac{1}{q}} \\ &\quad= \biggl[ \int _{0}^{\infty } \varpi (\lambda {}_{2},x) x^{p(1 - \lambda _{1}) - a - 1}f^{p}(x)\,dx\biggr]^{\frac{1}{p}} \\ &\qquad{}\times \biggl[ \int _{0}^{\infty } \omega (\lambda _{1},y) y^{q(1 - \lambda _{2}) - a - 1}g^{\prime \,q}(y)\,dy\biggr]^{\frac{1}{q}}. \end{aligned}$$ If (12) keeps the form of an equality, then there exist constants A and B, such that they are not all zero, satisfying $$ A\frac{y^{\lambda _{2} - 1}}{x^{(\lambda _{1} - 1)(p - 1)}}f^{p}(x) = B\frac{x^{\lambda _{1} - 1}}{y^{(\lambda _{2} - 1)(q - 1)}}g^{\prime \,q}(y)\quad \text{a.e. in }(0, \infty ) \times (0,\infty ). $$ We assume that \(A \ne 0\). For fixed a.e. \(y \in (0,\infty )\), we have $$ x^{p(1 - \lambda _{1}) - a - 1}f^{p}(x) = \biggl(\frac{B}{A}y^{q(1 - \lambda _{2})}g^{\prime \,q}(y) \biggr)x^{ - 1 - a}\quad \text{a.e. in }(0,\infty ). $$ Integration in the above expression, since for any \(a = \lambda - \lambda _{1} - \lambda _{2} \in \mathbf{R}\), \(\int _{0}^{\infty } x^{ - 1 - a}\,dx = \infty \), which contradicts the fact that $$ 0 < \int _{0}^{\infty } x^{p(1 - \lambda _{1}) - a - 1} f^{p}(x) \,dx < \infty. $$ Therefore, by (8) and (9), we have (11). We have the following reverse Hardy–Hilbert-type integral inequality involving one derivative function: $$\begin{aligned} I: ={}& \int _{0}^{\infty } \int _{0}^{\infty } \frac{f(x)g(y)}{(x + y)^{\lambda + 1}} \,dx\,dy \\ >{}& \frac{1}{\lambda } B^{\frac{1}{p}}(\lambda _{2},\lambda - \lambda _{2})B^{\frac{1}{q}}(\lambda _{1},\lambda - \lambda _{1}) \\ &{}\times \biggl[ \int _{0}^{\infty } x^{p(1 - \lambda _{1}) - a - 1} f^{p}(x) \,dx\biggr]^{\frac{1}{p}}\biggl[ \int _{0}^{\infty } y^{q(1 - \lambda _{2}) - a - 1} g^{\prime \,q}(y) \,dy\biggr]^{\frac{1}{q}}. 
\end{aligned}$$ In particular, for \(\lambda _{1} + \lambda _{2} = \lambda\) (or \(a = 0\)), we reduce (13) to the following: $$\begin{aligned} &\int _{0}^{\infty } \int _{0}^{\infty } \frac{f(x)g(y)}{(x + y)^{\lambda + 1}} \,dx\,dy \\ &\quad > \frac{1}{\lambda } B(\lambda _{1},\lambda _{2}) \biggl( \int _{0}^{\infty } x^{p(1 - \lambda _{1}) - 1} f^{p}(x) \,dx\biggr)^{\frac{1}{p}}\biggl( \int _{0}^{\infty } y^{q(1 - \lambda _{2}) - 1} g^{\prime \,q}(y) \,dy\biggr)^{\frac{1}{q}}, \end{aligned}$$ where the constant factor \(\frac{1}{\lambda } B(\lambda _{1},\lambda _{2})\) is the best possible. Using (4) and (5), in view of the Fubini theorem (cf. [31]), we find $$\begin{aligned} I &= \frac{1}{\Gamma (\lambda + 1)} \int _{0}^{\infty } \int _{0}^{\infty } f(x)g(y) \biggl[ \int _{0}^{\infty } t^{(\lambda + 1) - 1} e^{ - (x + y)t}\,dt \biggr]\,dx\,dy \\ &= \frac{1}{\Gamma (\lambda + 1)} \int _{0}^{\infty } t^{(\lambda + 1) - 1} \biggl( \int _{0}^{\infty } e^{ - xt}f(x)\,dx\biggr) \biggl( \int _{0}^{\infty } e^{ - yt} g(y)\,dy\biggr) \,dt \\ &= \frac{1}{\Gamma (\lambda + 1)} \int _{0}^{\infty } t^{(\lambda + 1) - 1} \biggl( \int _{0}^{\infty } e^{ - xt}f(x)\,dx\biggr) \biggl( \int _{0}^{\infty } t^{ - 1}e^{ - yt} g'(y)\,dy\biggr) \,dt \\ &= \frac{1}{\lambda \Gamma (\lambda )} \int _{0}^{\infty } \int _{0}^{\infty } f(x)g'(y) \biggl[ \int _{0}^{\infty } t^{\lambda - 1}e^{ - (x + y)t}\,dt \biggr] \,dx\,dy \\ &= \frac{\Gamma (\lambda )}{\lambda \Gamma (\lambda )} \int _{0}^{\infty } \int _{0}^{\infty } \frac{f(x)g'(y)}{(x + y)^{\lambda }} \,dx \,dy. \end{aligned}$$ Then by (11), we have (13). For \(a = 0\) in (13), we have (14). For any \(\varepsilon > 0\), we set $$ \tilde{f}(x): = \textstyle\begin{cases} 0,&0 < x \le 1, \\ x^{\lambda _{1} - \frac{\varepsilon }{p} - 1},&x > 1, \end{cases}\displaystyle \qquad \tilde{g}(y): =\textstyle\begin{cases} 0,&0 < y \le 1, \\ y^{\lambda _{2} - \frac{\varepsilon }{q}},&y > 1. \end{cases} $$ We obtain \(\tilde{g}(y) = o(1)\ (y \to 0^{ +} ),\tilde{g}(y) = o(e^{ty})\ (t > 0;y \to \infty ),\tilde{g}'(y) \equiv 0\ (0 < y < 1)\), and $$ \tilde{g}'(y) = \biggl(\lambda _{2} - \frac{\varepsilon }{q} \biggr)y^{\lambda _{2} - \frac{\varepsilon }{q} - 1}\quad (y > 1). $$ If there exists a constant \(M( \ge \frac{1}{\lambda } B(\lambda _{1},\lambda _{2}))\), such that (14) is valid when replacing \(\frac{1}{\lambda } B(\lambda _{1},\lambda _{2})\) by M, then in particular, by substitution of \(f(x) = \tilde{f}(x),g(y) = \tilde{g}(y)\) and \(g'(y) = \tilde{g}'(y)\), we have $$\begin{aligned} \tilde{I}&: = \int _{0}^{\infty } \int _{0}^{\infty } \frac{\tilde{f}(x)\tilde{g}(y)}{(x + y)^{\lambda + 1}} \,dx\,dy \\ &> M \biggl[ \int _{0}^{\infty } x^{p(1 - \lambda _{1}) - 1} \tilde{f}^{p}(x) \,dx\biggr]^{\frac{1}{p}}\biggl[ \int _{0}^{\infty } y^{q(1 - \lambda _{2}) - 1} \tilde{g}^{\prime \,q}(y) \,dy\biggr]^{\frac{1}{q}}. \end{aligned}$$ We obtain $$\begin{aligned} \tilde{J}&: = \biggl[ \int _{0}^{\infty } x^{p(1 - \lambda _{1}) - 1} \tilde{f}^{p}(x) \,dx\biggr]^{\frac{1}{p}}\biggl[ \int _{0}^{\infty } y^{q(1 - \lambda _{2}) - 1} \tilde{g}^{\prime \,q}(y) \,dy\biggr]^{\frac{1}{q}} \\ &= \biggl(\lambda _{2} - \frac{\varepsilon }{q}\biggr) \biggl( \int _{1}^{\infty } x^{ - \varepsilon - 1} \,dx \biggr)^{\frac{1}{p}}\biggl( \int _{1}^{\infty } y^{ - \varepsilon - 1} \,dy \biggr)^{\frac{1}{q}} \\ &= \biggl(\lambda _{2} - \frac{\varepsilon }{q}\biggr) \int _{1}^{\infty } x^{ - \varepsilon - 1} \,dx = \frac{1}{\varepsilon } \biggl(\lambda _{2} - \frac{\varepsilon }{q}\biggr). 
\end{aligned}$$ In view of the Fubini theorem (cf. [31]), it follows that $$\begin{aligned} \tilde{I} &= \int _{1}^{\infty } \biggl[ \int _{1}^{\infty } \frac{y^{\lambda _{2} - \frac{\varepsilon }{q}}}{(x + y)^{\lambda + 1}} \,dy \biggr]x^{\lambda _{1} - \frac{\varepsilon }{p} - 1}\,dx = \int _{1}^{\infty } x^{ - \varepsilon - 1}\biggl[ \int _{\frac{1}{x}}^{\infty } \frac{u^{\lambda _{2} - \frac{\varepsilon }{q}}}{(1 + u)^{\lambda + 1}} \,du\biggr] \,dx \\ &= \int _{1}^{\infty } x^{ - \varepsilon - 1}\biggl[ \int _{\frac{1}{x}}^{1} \frac{u^{\lambda _{2} - \frac{\varepsilon }{q}}}{(1 + u)^{\lambda + 1}} \,du \biggr]\,dx + \int _{1}^{\infty } x^{ - \varepsilon - 1}\biggl[ \int _{1}^{\infty } \frac{u^{\lambda _{2} - \frac{\varepsilon }{q}}}{(1 + u)^{\lambda + 1}} \,du \biggr]\,dx \\ &= \int _{0}^{1} \biggl( \int _{\frac{1}{u}}^{\infty } x^{ - \varepsilon - 1} \,dx\biggr) \frac{u^{\lambda _{2} - \frac{\varepsilon }{q}}}{(1 + u)^{\lambda + 1}}\,du + \frac{1}{\varepsilon } \int _{1}^{\infty } \frac{u^{\lambda _{2} - \frac{\varepsilon }{q}}}{(1 + u)^{\lambda + 1}} \,du \\ &= \frac{1}{\varepsilon } \biggl[ \int _{0}^{1} \frac{u^{\lambda _{2} + \frac{\varepsilon }{p}}}{(1 + u)^{\lambda + 1}}\,du + \int _{1}^{\infty } \frac{u^{\lambda _{2} - \frac{\varepsilon }{q}}}{(1 + u)^{\lambda + 1}} \,du\biggr]. \end{aligned}$$ By (16), we obtain $$ \int _{0}^{1} \frac{u^{\lambda _{2} + \frac{\varepsilon }{p}}}{(1 + u)^{\lambda + 1}}\,du + \int _{1}^{\infty } \frac{u^{\lambda _{2} - \frac{\varepsilon }{q}}}{(1 + u)^{\lambda + 1}} \,du > \varepsilon \tilde{I} > \varepsilon M\tilde{J} > M\biggl(\lambda _{2} - \frac{\varepsilon }{q}\biggr). $$ Putting \(\varepsilon \to 0^{ +}\) in the above inequality, in view of the continuity of the beta function, we find $$ \frac{\lambda _{2}}{\lambda } B(\lambda _{1},\lambda _{2}) = \frac{\lambda _{2}\Gamma (\lambda _{1})\Gamma (\lambda _{2})}{\lambda \Gamma (\lambda )} = B(\lambda _{1},\lambda _{2} + 1) \ge M \lambda _{2}, $$ namely, \(\frac{1}{\lambda } B(\lambda _{1},\lambda _{2}) \ge M\). Hence, \(M = \frac{1}{\lambda } B(\lambda _{1},\lambda _{2})\) is the best possible constant factor in (14). The theorem is proved. □ We set \(\hat{\lambda }_{1}: = \lambda _{1} + \frac{a}{p} = \frac{\lambda - \lambda _{2}}{p} + \frac{\lambda _{1}}{q},\hat{\lambda }_{2}: = \lambda _{2} + \frac{a}{q} = \frac{\lambda - \lambda _{1}}{q} + \frac{\lambda _{2}}{p}\). It follows that \(\hat{\lambda }_{1} + \hat{\lambda }_{2} = \lambda \). For $$ a = \lambda - \lambda _{1} - \lambda _{2} \in \bigl( - p \lambda _{1},p(\lambda - \lambda _{1})\bigr), $$ we find \(0 < \hat{\lambda }_{1} < \lambda \), and then \(0 < \hat{\lambda }_{2} = \lambda - \hat{\lambda }_{1} < \lambda \). So we rewrite (13) as follows: $$\begin{aligned} I: ={}& \int _{0}^{\infty } \int _{0}^{\infty } \frac{f(x)g(y)}{(x + y)^{\lambda + 1}} \,dx\,dy \\ >{}& \frac{1}{\lambda } B^{\frac{1}{p}}(\lambda _{2},\lambda - \lambda _{2})B^{\frac{1}{q}}(\lambda _{1},\lambda - \lambda _{1}) \\ &{}\times \biggl[ \int _{0}^{\infty } x^{p(1 - \hat{\lambda }_{1}) - 1} f^{p}(x) \,dx\biggr]^{\frac{1}{p}}\biggl[ \int _{0}^{\infty } y^{q(1 - \hat{\lambda }_{2}) - 1} g^{\prime \,q}(y) \,dy\biggr]^{\frac{1}{q}}. \end{aligned}$$ If the constant factor $$ \frac{1}{\lambda } B^{\frac{1}{p}}(\lambda _{2},\lambda - \lambda _{2})B^{\frac{1}{q}}(\lambda _{1},\lambda - \lambda _{1}) $$ in (13) (or (17)) is the best possible, then \(\lambda _{1} + \lambda _{2} = \lambda \). 
By (14) (for \(\lambda _{i} = \hat{\lambda }_{i}\ (i = 1,2)\)), since is the best possible constant factor in (17), we have the following inequality: $$ \frac{1}{\lambda } B^{\frac{1}{p}}(\lambda _{2},\lambda - \lambda _{2})B^{\frac{1}{q}}(\lambda _{1},\lambda - \lambda _{1})\ge \frac{1}{\lambda } B(\hat{\lambda }_{1},\hat{ \lambda }_{2}) \quad( \in \mathrm{R}_{ +} ), $$ namely, $$ B(\hat{\lambda }_{1},\hat{\lambda }_{2})\le B^{\frac{1}{p}}(\lambda _{2},\lambda - \lambda _{2})B^{\frac{1}{q}}( \lambda _{1},\lambda - \lambda _{1}). $$ $$\begin{aligned} B(\hat{\lambda }_{1},\hat{\lambda }_{2})& = \int _{0}^{\infty } \frac{u^{\hat{\lambda }_{1} - 1}}{(1 + u)^{\lambda }} \,du \\ &= \int _{0}^{\infty } \frac{1}{(1 + u)^{\lambda }} u^{\frac{\lambda - \lambda _{2}}{p} + \frac{\lambda _{1}}{q} - 1} \,du = \int _{0}^{\infty } \frac{1}{(1 + u)^{\lambda }} \bigl(u^{\frac{\lambda - \lambda _{2} - 1}{p}}\bigr) \bigl(u^{\frac{\lambda _{1} - 1}{q}}\bigr)\,du \\ &\ge \biggl[ \int _{0}^{\infty } \frac{u^{\lambda - \lambda _{2} - 1}}{(1 + u)^{\lambda }} \,du \biggr]^{\frac{1}{p}}\biggl[ \int _{0}^{\infty } \frac{u^{\lambda _{1} - 1}}{(1 + u)^{\lambda }} \,du \biggr]^{\frac{1}{q}} \\ &= B^{\frac{1}{p}}(\lambda _{2},\lambda - \lambda _{2})B^{\frac{1}{q}}( \lambda _{1},\lambda - \lambda _{1}). \end{aligned}$$ It follows that (18) keeps the form of an equality. We observe that (18) keeps the form of an equality if and only if there exist constants A and B, such that they are not all zero and $$ Au^{\lambda - \lambda _{2} - 1} = Bu^{\lambda _{1} - 1}\quad \text{a.e. in }R_{ +} $$ (cf. [30]). Assuming that \(A \ne 0\), it follows that $$ u^{\lambda - \lambda _{2} - \lambda _{1}} = \frac{B}{A}\quad \text{a.e. in }R_{ +}. $$ We have \(a = \lambda - \lambda _{1} - \lambda _{2} = 0\), namely, \(\lambda _{1} + \lambda _{2} = \lambda \). The following statements (i), (ii), (iii) and (iv) are equivalent: Both \(B^{\frac{1}{p}}(\lambda _{2},\lambda - \lambda _{2})B^{\frac{1}{q}}(\lambda _{1},\lambda - \lambda _{1})\) and \(B(\frac{\lambda - \lambda _{2}}{p} + \frac{\lambda _{1}}{q},\frac{\lambda - \lambda _{1}}{q} + \frac{\lambda _{2}}{p})\) are finite and independent of \(p,q\); \(B^{\frac{1}{p}}(\lambda _{2},\lambda - \lambda _{2})B^{\frac{1}{q}}(\lambda _{1},\lambda - \lambda _{1})\) is equal to a single convergent integral $$\begin{aligned} B(\hat{\lambda }_{1},\hat{\lambda }_{2}) = \int _{0}^{\infty } \frac{u^{\hat{\lambda }_{1} - 1}}{(1 + u)^{\lambda }} \,du; \end{aligned}$$ if \(a = \lambda - \lambda _{1} - \lambda _{2} \in ( - p\lambda _{1},p(\lambda - \lambda _{1}))\), then \(\lambda _{1} + \lambda _{2} = \lambda\); the constant factor is the best possible in (13). (i) ⇒ (ii). In view of the assumption and the continuity of the beta function, we find $$\begin{aligned} &B^{\frac{1}{p}}(\lambda _{2},\lambda - \lambda _{2})B^{\frac{1}{q}}( \lambda _{1},\lambda - \lambda _{1}) \\ &\quad= \lim_{p \to 1^{ -}} \lim_{q \to - \infty } B^{\frac{1}{p}}( \lambda _{2},\lambda - \lambda _{2})B^{\frac{1}{q}}(\lambda _{1},\lambda - \lambda _{1}) = B(\lambda _{2}, \lambda - \lambda _{2}), \\ &B(\hat{\lambda }_{1},\hat{\lambda }_{2}) = \lim _{p \to 1^{ -}} \lim_{q \to - \infty } B\biggl(\frac{\lambda - \lambda _{2}}{p} + \frac{\lambda _{1}}{q},\frac{\lambda - \lambda _{1}}{q} + \frac{\lambda _{2}}{p}\biggr) = B(\lambda _{2},\lambda - \lambda _{2}). 
\end{aligned}$$ Hence, \(B^{\frac{1}{p}}(\lambda _{2},\lambda - \lambda _{2})B^{\frac{1}{q}}(\lambda _{1},\lambda - \lambda _{1})\) is equal to \(B(\hat{\lambda }_{1},\hat{\lambda }_{2})\), which is a single convergent integral. (ii) ⇒ (iii). Suppose that \(B^{\frac{1}{p}}(\lambda _{2},\lambda - \lambda _{2})B^{\frac{1}{q}}(\lambda _{1},\lambda - \lambda _{1})\) is equal to a single convergent integral \(\int _{0}^{\infty } \frac{1}{(1 + u)^{\lambda }} u^{\hat{\lambda }_{1} - 1}\,du ( \in \mathrm{R}_{ +} )\). Then (18) keeps the form of an equality. By the proof of Theorem 2, we have \(\lambda _{1} + \lambda _{2} = \lambda \). (iii) ⇒ (iv). If \(\lambda _{1} + \lambda _{2} = \lambda \), then by Theorem 1, the constant factor $$ \frac{1}{\lambda } B^{\frac{1}{p}}(\lambda _{2},\lambda - \lambda _{2})B^{\frac{1}{q}}(\lambda _{1},\lambda - \lambda _{1}) \biggl( = \frac{1}{\lambda } B(\lambda _{1},\lambda _{2})\biggr) $$ in (13) is the best possible. (iv) ⇒ (i). By Theorem 2, we have \(\lambda _{1} + \lambda _{2} = \lambda \), and then $$\begin{aligned} &B^{\frac{1}{p}}(\lambda _{2},\lambda - \lambda _{2})B^{\frac{1}{q}}( \lambda _{1},\lambda - \lambda _{1}) = B(\lambda _{1},\lambda _{2}), \\ &B\biggl(\frac{\lambda - \lambda _{2}}{p} + \frac{\lambda _{1}}{q},\frac{\lambda - \lambda _{1}}{q} + \frac{\lambda _{2}}{p}\biggr) = B(\lambda _{1},\lambda _{2}). \end{aligned}$$ It follows that both of them are finite and independent of \(p,q\). Hence, the statements (i), (ii), (iii) and (iv) are equivalent. For \(a = 0\) in (11), we have $$\begin{aligned} &\int _{0}^{\infty } \int _{0}^{\infty } \frac{f(x)g'(y)}{(x + y)^{\lambda }} \,dx \\ &\quad > B(\lambda _{1},\lambda {}_{2}) \biggl[ \int _{0}^{\infty } x^{p(1 - \lambda _{1}) - 1} f^{p}(x) \,dx\biggr]^{\frac{1}{p}}\biggl[ \int _{0}^{\infty } y^{q(1 - \lambda _{2}) - 1} g^{\prime \,q}(y) \,dy\biggr]^{\frac{1}{q}}. \end{aligned}$$ We conform that the constant factor \(B(\lambda {}_{1},\lambda _{2})\) in (19) is the best possible. Otherwise, we would reach a contradiction by (15) (for \(a = 0\)): the constant factor in (14) is not the best possible. Equivalent form and some particular inequalities Inequality (13) is equivalent to the following reverse Hardy–Hilbert-type integral inequality involving one derivative function: $$\begin{aligned} J: = {}&\biggl\{ \int _{0}^{\infty } x^{q(\lambda {}_{1} + a) - a - 1}\biggl[ \int _{0}^{\infty } \frac{g(y)}{(x + y)^{\lambda + 1}} \,dy \biggr]^{q}\,dx\biggr\} ^{\frac{1}{q}} \\ >{}& \frac{1}{\lambda } B^{\frac{1}{p}}(\lambda _{2},\lambda - \lambda _{2})B^{\frac{1}{q}}(\lambda _{1},\lambda - \lambda _{1}) \biggl[ \int _{0}^{\infty } y^{q(1 - \lambda _{2}) - a - 1} g^{\prime \,q}(y) \,dy\biggr]^{\frac{1}{q}}. \end{aligned}$$ In particular, for \(\lambda _{1} + \lambda _{2} = \lambda\) (or \(a = 0\)), we reduce (20) to the equivalent form of (14) as follows: $$ \biggl\{ \int _{0}^{\infty } x^{q\lambda {}_{1} - 1}\biggl[ \int _{0}^{\infty } \frac{g(y)}{(x + y)^{\lambda + 1}} \,dy \biggr]^{q}\,dx\biggr\} ^{\frac{1}{q}} > \frac{1}{\lambda } B(\lambda _{1},\lambda _{2}) \biggl[ \int _{0}^{\infty } y^{q(1 - \lambda _{2}) - 1} g^{\prime \,q}(y) \,dy\biggr]^{\frac{1}{q}}, $$ Suppose that (20) is valid. By the reverse Hölder integral inequality (cf. 
[30]), we have $$\begin{aligned} I &= \int _{0}^{\infty } \bigl[x^{\frac{1}{q} - \lambda _{1} - \frac{a}{p}}f(x)\bigr] \biggl[x^{ - \frac{1}{q} + \lambda _{1} + \frac{a}{p}} \int _{0}^{\infty } \frac{g(y)}{(x + y)^{\lambda + 1}} \,dy\biggr]\,dx \\ &\ge \biggl\{ \int _{0}^{\infty } x^{p(1 - \lambda _{1}) - a - 1} f^{p}(x) \,dx\biggr\} ^{\frac{1}{p}}J. \end{aligned}$$ On the other hand, assuming that (13) is valid, we set $$ f(x): = x^{q(\lambda {}_{1} + a) - a - 1}\biggl[ \int _{0}^{\infty } \frac{g(y)}{(x + y)^{\lambda + 1}} \,dy \biggr]^{q - 1},\quad x \in \mathrm{R}_{ +}. $$ If \(J = \infty \), then (20) is naturally valid; if \(J = 0\), then it is impossible to make (20) valid, namely \(J > 0\). Suppose that \(0 < J < \infty \). By (13), we have $$\begin{aligned} &\int _{0}^{\infty } x^{p(1 - \lambda _{1}) - a - 1} f^{p}(x) \,dx \\ &\quad= J^{q} = I> \frac{1}{\lambda } B^{\frac{1}{p}}(\lambda _{2},\lambda - \lambda _{2})B^{\frac{1}{q}}(\lambda _{1},\lambda - \lambda _{1})J^{q - 1} \biggl[ \int _{0}^{\infty } y^{q(1 - \lambda _{2}) - a - 1} g^{\prime \,q}(y) \,dy\biggr]^{\frac{1}{q}}, \\ &J = \biggl[ \int _{0}^{\infty } x^{p(1 - \lambda _{1}) - a - 1} f^{p}(x) \,dx\biggr]^{\frac{1}{q}} \\ &\phantom{J }> \frac{1}{\lambda } B^{\frac{1}{p}}(\lambda _{2},\lambda - \lambda _{2})B^{\frac{1}{q}}(\lambda _{1},\lambda - \lambda _{1}) \biggl[ \int _{0}^{\infty } y^{q(1 - \lambda _{2}) - a - 1} g^{\prime \,q}(y) \,dy\biggr]^{\frac{1}{q}}, \end{aligned}$$ namely, (20) follows, which is equivalent to (13). The constant factor \(\frac{1}{\lambda } B(\lambda _{1},\lambda _{2})\) is the best possible in (21). Otherwise, by (22) (for \(a = 0\)), we would reach a contradiction: that the constant factor in (14) is not the best possible. Replacing x by \(\frac{1}{x}\), and then replacing \(x^{\lambda - 1}f(\frac{1}{x})\) by \(f(x)\) in (13) and (20), by calculation, we have the following. Corollary 1 The following reverse Hardy–Hilbert-type integral inequalities with the non-homogeneous kernel involving one derivative function are equivalent: $$\begin{aligned} &\int _{0}^{\infty } \int _{0}^{\infty } \frac{f(x)g(y)}{(1 + xy)^{\lambda + 1}} \,dx\,dy \\ &\quad > \frac{1}{\lambda } B^{\frac{1}{p}}(\lambda _{2},\lambda - \lambda _{2})B^{\frac{1}{q}}(\lambda _{1},\lambda - \lambda _{1}) \\ &\qquad{}\times \biggl[ \int _{0}^{\infty } x^{p(\lambda _{1} - \lambda ) + a - 1} f^{p}(x) \,dx\biggr]^{\frac{1}{p}}\biggl[ \int _{0}^{\infty } y^{q(1 - \lambda _{2}) - a - 1} g^{\prime \,q}(y) \,dy\biggr]^{\frac{1}{q}}, \end{aligned}$$ $$\begin{aligned} &\biggl\{ \int _{0}^{\infty } x^{q(\lambda - \lambda {}_{1} - a + 1) + a - 1}\biggl[ \int _{0}^{\infty } \frac{g(y)}{(1 + xy)^{\lambda + 1}} \,dy \biggr]^{q}\,dx\biggr\} ^{\frac{1}{q}} \\ &\quad > \frac{1}{\lambda } B^{\frac{1}{p}}(\lambda _{2},\lambda - \lambda _{2})B^{\frac{1}{q}}(\lambda _{1},\lambda - \lambda _{1}) \biggl[ \int _{0}^{\infty } y^{q(1 - \lambda _{2}) - a - 1} g^{\prime \,q}(y) \,dy\biggr]^{\frac{1}{q}}. \end{aligned}$$ Moreover, \(\lambda _{1} + \lambda _{2} = \lambda\) (or \(a = 0\)) if and only if the constant factor \(\frac{1}{\lambda } B^{\frac{1}{p}}(\lambda _{2},\lambda - \lambda _{2})B^{\frac{1}{q}}(\lambda _{1},\lambda - \lambda _{1})\) in (23) and (24) is the best possible. 
For \(\lambda _{1} + \lambda _{2} = \lambda\) (or \(a = 0\)), we have the following reverse equivalent inequalities with the non-homogeneous kernel and the best possible constant factor \(\frac{1}{\lambda } B(\lambda _{1},\lambda _{2})\): $$\begin{aligned} &\int _{0}^{\infty } \int _{0}^{\infty } \frac{f(x)g(y)}{(1 + xy)^{\lambda + 1}} \,dx\,dy \\ &\quad > \frac{1}{\lambda } B(\lambda _{1},\lambda _{2}) \times \biggl( \int _{0}^{\infty } x^{ - p\lambda _{2} - 1} f^{p}(x) \,dx\biggr)^{\frac{1}{p}}\biggl[ \int _{0}^{\infty } y^{q(1 - \lambda _{2}) - 1} g^{\prime \,q}(y) \,dy\biggr]^{\frac{1}{q}}, \end{aligned}$$ $$\begin{aligned} &\biggl\{ \int _{0}^{\infty } x^{q(\lambda {}_{2} + 1) - 1}\biggl[ \int _{0}^{\infty } \frac{g(y)}{(1 + xy)^{\lambda + 1}} \,dy \biggr]^{q}\,dx\biggr\} ^{\frac{1}{q}} \\ &\quad > \frac{1}{\lambda } B(\lambda _{1},\lambda _{2}) \biggl[ \int _{0}^{\infty } y^{q(1 - \lambda _{2}) - 1} g^{\prime \,q}(y) \,dy\biggr]^{\frac{1}{q}}. \end{aligned}$$ For \(\lambda _{1} = \frac{\lambda }{r},\lambda _{2} = \frac{\lambda }{s}\ (r > 1,\frac{1}{r} + \frac{1}{s} = 1)\) in (14), (21), (25) and (26), we have the following two couples of reverse equivalent integral inequalities with the same best possible constant factor \(\frac{1}{\lambda } B(\frac{\lambda }{r},\frac{\lambda }{s})\): $$\begin{aligned} &\int _{0}^{\infty } \int _{0}^{\infty } \frac{f(x)g(y)}{(x + y)^{\lambda + 1}} \,dx\,dy \\ &\quad > \frac{1}{\lambda } B\biggl(\frac{\lambda }{r},\frac{\lambda }{s}\biggr) \biggl[ \int _{0}^{\infty } x^{p(1 - \frac{\lambda }{r}) - 1} f^{p}(x) \,dx\biggr]^{\frac{1}{p}}\biggl[ \int _{0}^{\infty } y^{q(1 - \frac{\lambda }{s}) - 1} g^{\prime \,q}(y) \,dy\biggr]^{\frac{1}{q}}, \end{aligned}$$ $$\begin{aligned} &\biggl\{ \int _{0}^{\infty } x^{\frac{q\lambda }{r} - 1}\biggl[ \int _{0}^{\infty } \frac{g(y)}{(x + y)^{\lambda + 1}} \,dy \biggr]^{q}\,dx\biggr\} ^{\frac{1}{q}} \\ &\quad > \frac{1}{\lambda } B\biggl( \frac{\lambda }{r},\frac{\lambda }{s}\biggr) \biggl[ \int _{0}^{\infty } y^{q(1 - \frac{\lambda }{s}) - 1} g^{\prime \,q}(y) \,dy\biggr]^{\frac{1}{q}}; \end{aligned}$$ $$\begin{aligned} &\int _{0}^{\infty } \int _{0}^{\infty } \frac{f(x)g(y)}{(1 + xy)^{\lambda + 1}} \,dx\,dy \\ &\quad > \frac{1}{\lambda } B\biggl(\frac{\lambda }{r},\frac{\lambda }{s}\biggr) \biggl( \int _{0}^{\infty } x^{ - \frac{p\lambda }{s} - 1} f^{p}(x) \,dx\biggr)^{\frac{1}{p}}\biggl[ \int _{0}^{\infty } y^{q(1 - \frac{\lambda }{s}) - 1} g^{\prime \,q}(y) \,dy\biggr]^{\frac{1}{q}}, \end{aligned}$$ $$\begin{aligned} &\biggl\{ \int _{0}^{\infty } x^{q(\frac{\lambda }{s} + 1) - 1}\biggl[ \int _{0}^{\infty } \frac{g(y)}{(1 + xy)^{\lambda + 1}} \,dy \biggr]^{q}\,dx\biggr\} ^{\frac{1}{q}} \\ &\quad > \frac{1}{\lambda } B\biggl( \frac{\lambda }{r},\frac{\lambda }{s}\biggr) \biggl[ \int _{0}^{\infty } y^{q(1 - \frac{\lambda }{s}) - 1} g^{\prime \,q}(y) \,dy\biggr]^{\frac{1}{q}}. 
\end{aligned}$$ In particular, for \(\lambda = 1,r = s = 2\), we have $$\begin{aligned} &\int _{0}^{\infty } \int _{0}^{\infty } \frac{f(x)g(y)}{(x + y)^{2}} \,dx\,dy > \pi \biggl( \int _{0}^{\infty } x^{\frac{p}{2} - 1} f^{p}(x) \,dx\biggr)^{\frac{1}{p}}\biggl( \int _{0}^{\infty } y^{\frac{q}{2} - 1} g^{\prime \,q}(y) \,dy\biggr)^{\frac{1}{q}}, \end{aligned}$$ $$\begin{aligned} &\biggl\{ \int _{0}^{\infty } x^{\frac{q}{2} - 1}\biggl[ \int _{0}^{\infty } \frac{g(y)}{(x + y)^{2}} \,dy \biggr]^{q}\,dx\biggr\} ^{\frac{1}{q}} > \pi \biggl( \int _{0}^{\infty } y^{\frac{q}{2} - 1} g^{\prime \,q}(y) \,dy\biggr)^{\frac{1}{q}}; \end{aligned}$$ $$\begin{aligned} &\int _{0}^{\infty } \int _{0}^{\infty } \frac{f(x)g(y)}{(1 + xy)^{2}} \,dx\,dy > \pi \biggl( \int _{0}^{\infty } x^{ - \frac{p}{2} - 1} f^{p}(x) \,dx\biggr)^{\frac{1}{p}}\biggl( \int _{0}^{\infty } y^{\frac{q}{2} - 1} g^{\prime \,q}(y) \,dy\biggr)^{\frac{1}{q}}, \end{aligned}$$ $$\begin{aligned} &\biggl\{ \int _{0}^{\infty } x^{\frac{3q}{2} - 1}\biggl[ \int _{0}^{\infty } \frac{g(y)}{(1 + xy)^{2}} \,dy \biggr]^{q}\,dx\biggr\} ^{\frac{1}{q}} > \pi \biggl( \int _{0}^{\infty } y^{\frac{q}{2} - 1} g^{\prime \,q}(y) \,dy\biggr)^{\frac{1}{q}}. \end{aligned}$$ In this paper, following [21, 23], by the use of weight functions, the idea of introducing parameters, the reverse extension of (1) and the technique of real analysis, a reverse Hardy–Hilbert-type integral inequality with the kernel \(\frac{1}{(x + y)^{\lambda + 1}}(\lambda > 0)\) involving one derivative function and the beta function is given in Theorem 1. The equivalent statements of the best possible constant factor related to several parameters are considered in Theorem 3. The equivalent form, the cases of non-homogeneous kernel and a few particular inequalities are obtained in Theorem 4, Corollary 1 and Remark 3. The lemmas and theorems provide an extensive account of this type of inequalities. We declare that the data and material in this paper can be used publicly. Hardy, G.H., Littlewood, J.E., Polya, G.: Inequalities. Cambridge University Press, Cambridge (1934) Yang, B.C.: The Norm of Operator and Hilbert-Type Inequalities. Science Press, Beijing (2009) Yang, B.C.: Hilbert-Type Integral Inequalities. Bentham Science Publishers, The United Arab Emirates (2009) Yang, B.C.: On the norm of an integral operator and applications. J. Math. Anal. Appl. 321, 182–192 (2006) Xu, J.S.: Hardy–Hilbert's inequalities with two parameters. Adv. Math. 36(2), 63–76 (2007) Yang, B.C.: On the norm of a Hilbert's type linear operator and applications. J. Math. Anal. Appl. 325, 529–541 (2007) Xie, Z.T., Zeng, Z., Sun, Y.F.: A new Hilbert-type inequality with the homogeneous kernel of degree −2. Adv. Appl. Math. Sci. 12(7), 391–401 (2013) Zhen, Z., Raja Rama Gandhi, K., Xie, Z.T.: A new Hilbert-type inequality with the homogeneous kernel of degree −2 and with the integral. Bull. Math. Sci. Appl. 3(1), 11–20 (2014) Xin, D.M.: A Hilbert-type integral inequality with the homogeneous kernel of zero degree. Math. Theory Appl. 30(2), 70–74 (2010) Azar, L.E.: The connection between Hilbert and Hardy inequalities. Arch. Inequal. Appl. 2013, 452 (2013) Batbold, T., Sawano, Y.: Sharp bounds for m-linear Hilbert-type operators on the weighted Morrey spaces. Math. Inequal. Appl. 20, 263–283 (2017) Adiyasuren, V., Batbold, T., Krnic, M.: Multiple Hilbert-type inequalities involving some differential operators. Banach J. Math. Anal. 
10, 320–337 (2016) Adiyasuren, V., Batbold, T., Krni´c, M.: Hilbert-type inequalities involving differential operators, the best constants and applications. Math. Inequal. Appl. 18, 111–124 (2015) Rassias, M.T., Yang, B.C.: On half-discrete Hilbert's inequality. Appl. Math. Comput. 220, 75–93 (2013) Yang, B.C., Krnic, M.: A half-discrete Hilbert-type inequality with a general homogeneous kernel of degree 0. J. Math. Inequal. 6(3), 401–417 (2012) Rassias, M.T., Yang, B.C.: A multidimensional half-discrete Hilbert-type inequality and the Riemann zeta function. Appl. Math. Comput. 225, 263–277 (2013) Rassias, M.T., Yang, B.C.: On a multidimensional half-discrete Hilbert-type inequality related to the hyperbolic cotangent function. Appl. Math. Comput. 242, 800–813 (2013) Huang, Z.X., Yang, B.C.: On a half-discrete Hilbert-type inequality similar to Mulholland's inequality. Arch. Inequal. Appl. 2013, 290 (2013) Yang, B.C., Lebnath, L.: Half-Discrete Hilbert-Type Inequalities. World Scientific, Singapore (2014) Krnic, M., Pecaric, J.: Extension of Hilbert's inequality. J. Math. Anal. Appl. 324(1), 150–160 (2006) Adiyasuren, V., Batbold, T., Azar, L.E.: A new discrete Hilbert-type inequality involving partial sums. Arch. Inequal. Appl. 2019, 127 (2019) Mo, H.M., Yang, B.C.: On a new Hilbert-type integral inequality involving the upper limit functions. Arch. Inequal. Appl. 2020, 5 (2020) Hong, Y., Wen, Y.M.: A necessary and sufficient condition of that Hilbert type series inequality with homogeneous kernel has the best constant factor. Ann. Math. 37A(3), 329–336 (2016) Hong, Y.: On the structure character of Hilbert's type integral inequality with homogeneous kernel and applications. J. Jilin Univ. Sci. Ed. 55(2), 189–194 (2017) Hong, Y., Huang, Q.L., Yang, B.C., Liao, J.L.: The necessary and sufficient conditions for the existence of a kind of Hilbert-type multiple integral inequality with the non-homogeneous kernel and its applications. Arch. Inequal. Appl. 2017, 316 (2017) Xin, D.M., Yang, B.C., Wang, A.Z.: Equivalent property of a Hilbert-type integral inequality related to the beta function in the whole plane. J. Funct. Spaces 2018, Article ID 2691816 (2018) Hong, Y., He, B., Yang, B.C.: Necessary and sufficient conditions for the validity of Hilbert type integral inequalities with a class of quasi-homogeneous kernels and its application in operator theory. J. Math. Inequal. 12(3), 777–788 (2018) Liao, J.Q., Wu, S.H., Yang, B.C.: On a new half-discrete Hilbert-type inequality involving the variable upper limit integral and the partial sum. Mathematics 8, 229 (2020). https://doi.org/10.3390/math8020229 Wang, Z.X., Guo, D.R.: Introduction to Special Functions. Science Press, Beijing (1979) Kuang, J.C.: Applied Inequalities. Shangdong Science and Technology Press, Jinan (2004) Kuang, J.C.: Real and Functional Analysis (Continuation), vol. 2. Higher Education Press, Beijing (2015) The authors thank the referee for a useful proposal to reform the paper. This work is supported by the National Natural Science Foundation (No. 61772140), and Science and Technology Planning Project Item of Guangzhou City (No. 201707010229). We are grateful for their help. Department of Computer Science, Guangdong University of Education, Guangzhou, Guangdong, 51003, China Department of Mathematics, Guangdong University of Education, Guangzhou, Guangdong, 51003, China Bicheng Yang BY carried out the mathematical studies, participated in the sequence alignment and drafted the manuscript. 
QC participated in the design of the study and performed the numerical analysis. All authors read and approved the final manuscript. Correspondence to Qian Chen. Chen, Q., Yang, B. A reverse Hardy–Hilbert-type integral inequality involving one derivative function. J Inequal Appl 2020, 259 (2020). https://doi.org/10.1186/s13660-020-02528-0 Weight function Hardy–Hilbert-type integral inequality Derivative function Beta function
Multi-strategic RNA-seq analysis reveals a high-resolution transcriptional landscape in cotton

Kun Wang, Dehe Wang, Xiaomin Zheng, Ai Qin, Jie Zhou, Boyu Guo, Yanjun Chen, Xingpeng Wen, Wen Ye, Yu Zhou & Yuxian Zhu

Transcriptomics

Cotton is an important natural fiber crop, however, its comprehensive and high-resolution gene map is lacking. Here we integrate four complementary high-throughput techniques, including Pacbio long read Iso-seq, strand-specific RNA-seq, CAGE-seq, and PolyA-seq, to systematically explore the transcription landscape across 16 tissues or different organ types in Gossypium arboreum. We devise a computational pipeline, named IGIA, to reconstruct accurate gene structures from the integrated data. Our results reveal a dynamic and diverse transcriptional map in cotton: tissue-specific gene expression, alternative usage of TSSs and polyadenylation sites, hotspot of alternative splicing, and transcriptional read-through. These regulated events affect many genes in various aspects such as gain or loss of functional RNA motifs and protein domains, fine-tuning of DNA binding activity, and co-regulation for genes in the same complex or pathway. The methods and findings provide valuable resources for further functional genomic studies such as understanding natural SNP variations for plant community.

Cotton fiber, an excellent model for studying cell elongation and cell wall biosynthesis, is the principal natural source for textile industry. Thus, cotton is one of the most important fiber crop plants world-wide and has long been one of major focuses of plant research1. Recently, four different cotton genomes have been successfully sequenced and assembled, including two allotetraploid (AADD) Gossypium hirsutum2,3,4, and Gossypium barbadense3,4 genomes, and two ancestor diploid Gossypium raimondii (DD)1,5 and Gossypium arboreum (AA)6,7 genomes. In recent years, transcriptomic profiling analysis has also been reported for a close relative of G.
arboreum, Levant cotton (Gossypium herbaceum)8. The heterogeneity of RNAs produced from different promoter usage, alternative splicing (AS) and polyadenylation significantly increases the transcriptome repertoire that produces the plasticity of the proteome. Accumulating evidence supports that regulation of alternative transcripts play pivotal functions in eukaryote development, tissue identity9, and response to environmental stress10. In plants, high-throughput RNA-seq analysis has shown that alternative promotor usage, splicing and polyadenylation in Arabidopsis and other crops10,11 are significantly more prevalent than previously expected. Functional studies found that transcription start site (TSS) selection produces truncated proteins which change their subcellular localization11, and that regulated alternative splicing controls plant flowering12 and affects microRNA biogenesis13. Transcription end site (TES) selection and alternative polyadenylation (APA) are also known to regulate plant flowering14. Therefore, regulation of RNA transcription and processing significantly affects multiple aspects of plant growth and development. However, currently available gene models in cotton genomes are mostly derived from computational prediction or assembled only from short reads RNA sequencing data, which are incomplete or even inaccurate. Specifically, the Cottongen15 and CGP6 gene annotations for the AA genome contain only coding sequences, essentially missing other functional regions, such as 5′UTRs and 3′UTRs (UTR, Untranslated Region). Incompleteness of the cotton transcriptome hinders the investigation of molecular mechanisms for various biological processes. The next-generation sequencing (NGS)-based RNA-seq which generates millions of short reads, is often used for gene expression profiling; however, the method cannot identify accurate full-length transcripts, and potential amplification biases can be introduced during library construction16. Pacbio sequencing offers long reads, with average read lengths over 10 kb, and has been applied to profile the transcriptomes of human17, maize18, sorghum19, and tetraploid cotton (G. hirsutum)20 with the Iso-seq protocol. However, Pacbio sequencing is hindered by lower throughput, higher error rate (11–15%) and higher cost for identifying transcript isoforms, as in a recent review21. The hybrid sequencing, which integrates the strengths of NGS and Pacbio sequencing, has improved AS identification22, however, for accurate identification of the 5′ and 3′ ends of transcripts (TSS and TES, respectively), the hybrid sequencing remains inadequate. The most reliable and precise methods for identifying 5′ and 3′ ends can still be best achieved with NGS-based Cap Analysis Gene Expression (CAGE-seq)23 and PolyA-seq24. The computational method GRIT reconstructed gene models from RNA-seq short reads data by integrating CAGE-seq and PolyA-seq data for the Drosophila genome, and has obtained better results than Cufflinks with only RNA-seq data25. To systematically exploit the transcription landscape of G. arboreum, a diploid cotton species, here we combine four complementary high-throughput techniques: Pacbio Iso-seq for directly reading full-length isoforms, deep-depth strand-specific RNA-seq (ssRNA-seq) for quantifying expression and splicing, as well as specialized CAGE-seq and PolyA-seq for accurately defining the transcriptional initiation and polyadenylation sites, across 16 tissues including ovules at four developmental stages. 
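The payoff of combining these assays is that each one constrains a different part of a transcript model: CAGE-seq and PolyA-seq pin down the two ends, short-read junctions validate the internal structure, and the long read supplies the exon chain. The toy sketch below only illustrates this integration idea; it is not the authors' IGIA implementation (described next), and the coordinate conventions, 30-bp tolerance and data structures are assumptions made for the example.

```python
# Toy illustration of evidence integration: accept and polish a long-read
# exon chain only if its ends can be anchored to CAGE-seq (TSS) / PolyA-seq
# (TES) peaks and every internal junction is supported by short-read data.
from bisect import bisect_left

def nearest(sorted_sites, pos, tol):
    """Closest site within `tol` bp of `pos`, or None if unsupported."""
    i = bisect_left(sorted_sites, pos)
    best = None
    for j in (i - 1, i):
        if 0 <= j < len(sorted_sites) and abs(sorted_sites[j] - pos) <= tol:
            if best is None or abs(sorted_sites[j] - pos) < abs(best - pos):
                best = sorted_sites[j]
    return best

def polish_isoform(exons, tss_peaks, tes_peaks, junctions, tol=30):
    """exons: sorted [(start, end), ...] of one long read (+ strand assumed)."""
    exons = sorted(exons)
    tss = nearest(sorted(tss_peaks), exons[0][0], tol)
    tes = nearest(sorted(tes_peaks), exons[-1][1], tol)
    if tss is None or tes is None:
        return None                                 # 5' or 3' end unsupported
    introns = {(exons[i][1], exons[i + 1][0]) for i in range(len(exons) - 1)}
    if not introns <= set(junctions):
        return None                                 # an internal junction unsupported
    if len(exons) == 1:
        return [(tss, tes)]
    return [(tss, exons[0][1])] + exons[1:-1] + [(exons[-1][0], tes)]

# A noisy long read whose 5'/3' ends get corrected by CAGE/PolyA peaks.
read = [(1005, 1200), (1500, 1700), (2100, 2310)]
print(polish_isoform(read, tss_peaks=[1000], tes_peaks=[2300],
                     junctions=[(1200, 1500), (1700, 2100)]))
```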
We develop an efficient computational pipeline to take full advantage of each technique and generate a high-resolution transcript map. We discover and validate different modes of gene expression regulation in cotton development including alternative promoter usage, splicing hotspot and microexon switch, polycistron, and alternative polyadenylation site selection. The results from this study provide a highly reliable panoramagram of the transcription output in G. arboreum, which builds a foundation for studying phenotypic and functional variations in cotton. Multi-strategic RNA-seq for high-resolution RNA landscape In previous studies, we assembled the G. arboreum genome6,7. To systematically identify complete and accurate gene models including 5′UTRs and 3′UTRs and capture splicing isoforms for G. arboreum, we prepared RNA from 16 tissues, including anther, stigma, petal, bract, sepal and whole flower at 0 days post-anthesis (DPA), phloem, leaf, seedling root, seedling stem, cotyledon, seed, and ovules at four developmental stages (0, 5, 10, and 20 DPA) (Supplementary Fig. 1a). Pacbio long reads sequencing was performed on mixed RNA samples from the 16 tissues reaching close to the saturated depth based on computational sampling analysis (Supplementary Fig. 1b), which yielded 3,332,714 reads of insert (ROI) filtered from 64 SMRT cells, in a total 92.8 Gb size of 5,204,697 raw reads. For each of the 16 samples, we also performed ssRNA-seq in biological duplicates, generating a total of 1,517 million clean reads (455.1 Gb in size). To identify accurate TSSs and TESs, we carried out CAGE-seq (on average 83.1 million clean reads per sample) and PolyA-seq (on average 68.8 million clean reads per sample) for each of the 16 tissues. The statistics of all sequencing libraries is summarized in Supplementary Table 1–3 and Supplementary Data 1. To take advantage of each of the high-throughput technique, we developed an integrative analysis pipeline to build transcript models. For regions supported with Pacbio long reads, we utilized the developed tool termed IGIA (Integrative Gene Isoform Assembler), while for regions without long reads, we applied customized TACO pipeline recently developed26 (Fig. 1a). In the IGIA method, gene elements including TSSs, TESs, and introns, were identified from CAGE-seq, PolyA-seq and ssRNA-seq reads, respectively. Then, the polished long reads were segmented based on the identified elements. Importantly, we validated each long read on every element based on short reads, corrected errors and completed partial isoforms whenever possible, and classified them into different categories of isoforms (Fig. 1b and Supplementary Fig. 1c). Integrating the IGIA core isoforms (IsoF and IsoR) and IsoC, and the isoforms from the adjusted TACO pipeline, we identified a total of 94,170 transcripts from 36,826 genes (Supplementary Data 2, 3). Among IGIA cotton genes, 56.7% have only one transcript, and 17,101 genes have two or more transcripts (Fig. 1c). Our IGIA gene set had a large fraction overlapping with the previous CGP6 and Cottongen15 annotations, and contained more than 3400 new genes (Fig. 1d). For genes not annotated in IGIA, their expressions were extremely low or their lengths were too short (Fig. 1e). Integrative multi-strategic RNA-seq for the high-resolution RNA landscape in G. arboreum. a Experimental design and analysis workflow for Integrative Gene Isoform Annotation (IGIA). b Schematic illustration of IGIA strategy for identifying accurate isoforms. 
See details in Methods section. c Distribution of the number of isoforms per gene. d Venn diagram of gene annotations comparisons between CGP, Cottongen and IGIA. e Distribution of FPKM, gene length, and number of lowly expressed genes in subgroups of genes. f The deviation between TSS (left, CAGE-seq) and TES (right, PolyA-seq) peaks compared with those assembled from IGIA and other methods. g The number of unique splicing junctions only supported by ToFU, CGP, TACO, Cottongen, and IGIA. h Length distribution of the 5′UTR, CDS, 3′UTR, exon, and intron, from IGIA annotation for cotton compared with seven other species. The median lengths and significances of difference were marked (*p-value < 0.05; **p-value < 0.01; ***p-value < 0.001). i SNP distribution on composite gene body (top) and exon (bottom) of IGIA genes. The source data underlying Fig. 1e are provided as a Source Data file For assessing the 5′ and 3′ annotations in IGIA, we computed the distances of annotated TSSs and TESs from different pipelines with peaks identified by CAGE-seq and PolyA-seq, respectively (Fig. 1f). Because IGIA utilized the information from CAGE-seq and PolyA-seq, the IGIA gene set had the smallest deviation compared with the peaks. The cumulative fraction curves show that the transcripts from ToFU27 annotations with Pacbio long reads have higher resolution for 5′ and 3′ ends than TACO, StringTie and Cufflinks, indicating that short reads sequencing data alone are insufficient for identifying accurate TSSs and TESs, especially TESs. Next, we assessed the annotated splicing junctions in this study, by comparing with four sets of junction sites (JSs), including predictions from ToFU, TACO, and the CGP and Cottongen annotations (Supplementary Fig. 1d and Supplementary Data 4, 5). Among the JSs, 99.3% (121,912/122,791) of IGIA JSs were also supported from other predictions (Supplementary Fig. 1d), and IGIA had the fewest number of specific JSs (Fig. 1g), indicating less false positives. Notably, both CGP and Cottongen had a considerable number of JS annotations that were different from those deduced with IGIA (Supplementary Fig. 1d). To evaluate these JS differences, 174 pairs of different JSs were randomly selected in the genes with varying expresssion levels (0–25%, 25–50%, 50–75%, and 75–100%). The Sanger sequencing results from the two comparison groups (84 for CGP vs. IGIA and 90 for Cottongen vs. IGIA, respectively) showed that about 98% of the investigated JSs exactly match with IGIA annotations (Supplementary Fig. 2a, Supplementary Data 6, 7). The representative cases are shown in Supplementary Fig. 2b–f. In addition, the transcript set assembled using ToFU with only Pacbio long reads revealed 86,181 unique JSs which, however, were unsupported by any other method, and the JSs actually contained many boundary errors (74.26%) and bubble errors (20.74%) (Supplementary Fig. 1d right bottom), two common types of errors in Pacbio sequencing (Supplementary Fig. 1e). This, to a large extent, explains the higher number of JSs identified using Pacbio than NGS RNA-seq, a problem also been noted in a previous report19. In summary, these evaluations showed the IGIA pipeline discovered more reliable and complete JS annotations than the predictions from ToFU or TACO and previous CGP and Cottongen annotations for the G. arboreum genome. Based on the IGIA gene structure annotations, we characterized and compared multiple genomic features, such as length of UTRs, CDS, exons, and introns in G. 
arboreum and other seven genomes for protein-coding genes (Supplementary Table 4). The overall length features of G. arboreum were closer to other plants than animal genomes, as expected. However, the G. arboreum genome had unique features including longer 5′UTRs, wider range of 5′ and 3′UTR lengths than Arabidopsis and rice, and longer introns than Arabidopsis. Interestingly, compared with two other cotton species, the G. arboreum genes had wider range of 5′ and 3′UTR lengths (Fig. 1h), although the discrepancy might result from incomplete 5′UTR and 3′UTR annotations in other genomes. To explore natural sequence variations on these functional elements, we analyzed the distribution of 17,883,108 high-quality single nucleotide polymorphisms (SNPs) from 230 G. arboreum lines7 in IGIA genes and different categories of elements including 5′UTR, CDS, 3′UTR, and introns. As expected, genic regions possessed significantly fewer SNP mutations than intergenic regions, and the CDS regions had the lowest natural variation frequency (1.65 SNP/kb) (Supplementary Fig. 3a). Notably, sharp transitions of SNP density were observed around TSSs and TES at the gene level, and exon-intron boundary, as well as UTR-CDS boundary at the mRNA level (Fig. 1i and Supplementary Fig. 3b). An obvious valley was observed around 30–40 nt upstream of the TSS (Fig. 1i top), where TATA box motif is enriched. These results reflect the accuracy of IGIA annotation of TSSs, TESs, and splicing JSs. Gene expression dynamics across 16 tissues in G. arboreum Based on the improved gene annotation, we next attempted to build a spatio-temporal dynamic transcriptome atlas for G. arboreum. We used our ssRNA-seq data to quantify gene expression across 16 tissues including ovules at four developmental stages (0, 5, 10, and 20 DPA) in two biological replicates. The average correlation coefficients between biological replicates for all tissues were greater than 0.95, indicating high quality and reproducibility of our data (Supplementary Fig. 4). We computed the FPKM (number of Fragments Per Kb per Million mapped reads) values for IGIA genes across all tissues (Supplementary Data 8), and investigated their similarity and specificity. Three groups of tissues were discovered based on the expression correlation matrix (Supplementary Fig. 5a): the three reproductive tissues including stigma, anther, and petal; four developmental stages of ovules and seed; other vegetative tissues and the whole flower. These results fit well with our expectation: the ovules at different developmental stages show the most similar gene expression, while vegetative tissues and reproductive tissues differ significantly, thus being grouped separately. To provide a global view on gene expression across 16 tissues, we made a maximum-value normalized expression heatmap clustered by gene, revealing a considerable number of constitutively expressed genes (Supplementary Fig. 5b). The ubiquitously expressed genes in more than 12 tissues (10,919) accounted for almost 42.46% of expressed genes (Supplementary Fig. 5c). To identify tissue-specific genes, we used an entropy-based metric (see Methods) to quantify the tissue specificity of all genes (Supplementary Fig. 5d). A total of 5702 genes with score larger or equal to 1 were identified as tissue-specific (Supplementary Fig. 5d and Supplementary Data 9). Hierarchical clustering of these genes showed the male reproductive organ (anther) had the largest number of tissue-specific genes. 
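The entropy-based specificity metric is defined in the authors' Methods, which are not reproduced here; a commonly used formulation scores each gene by how far its expression distribution across tissues departs from uniform. The sketch below follows that generic formulation, so the formula, the pseudocount and the score >= 1 cutoff should be read as assumptions for illustration rather than the authors' exact definition.

```python
# Sketch of an entropy-based tissue-specificity score from a genes x tissues
# FPKM matrix: score = log2(n_tissues) - Shannon entropy of the per-gene
# normalized expression vector (0 for uniform expression, high for specific).
import numpy as np
import pandas as pd

def specificity_scores(fpkm: pd.DataFrame, pseudo: float = 1e-9) -> pd.Series:
    p = fpkm.div(fpkm.sum(axis=1) + pseudo, axis=0) + pseudo   # per-gene distribution
    entropy = -(p * np.log2(p)).sum(axis=1)                    # Shannon entropy (bits)
    return np.log2(fpkm.shape[1]) - entropy

# Minimal mock example with four tissues.
demo = pd.DataFrame(
    {"anther": [100, 5], "ovule_0DPA": [1, 5], "leaf": [0, 5], "root": [2, 5]},
    index=["anther_specific_gene", "housekeeping_gene"],
)
scores = specificity_scores(demo)
print(scores.round(2))
print("tissue-specific (score >= 1):", list(scores[scores >= 1].index))
```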
The homologous genes GhMYB25, GhHOX3, and other factors important for fiber initiation and elongation in Gossypium hirsutum28, showed an obvious specific expression patterns in ovule tissues (highlighted box in Supplementary Fig. 5d). To further validate the reliability of our data, we randomly selected 16 genes from each tissue-specific group (Supplementary Fig. 5e top) and measured their expression using quantitative polymerase chain reaction (qPCR) across 16 tissues (each with three replicates, 768 reactions in total). The results (Supplementary Fig. 5e bottom) were highly consistent with our ssRNA-seq data. As exemplified by the vacuolar amino acid transporter gene (Supplementary Fig. 5f), the expression signals from ssRNA-seq across 16 tissues clearly revealed an ovule-specific gene, which may have an important function during cotton ovule early development. To finely resolve the gene expression of fiber at 5, 10, and 20 DPA, the fibers were stripped from the epidermal layer of the ovule for ssRNA-seq (Supplementary Fig. 6a and Supplementary Fig. 4). The expression profiles of six samples were integrated with the above 16 tissue samples for t-SNE plot analysis. The ovule, ovule without fiber (ovule-F), stripped fiber (Fiber), and seed were clustered into one group and clearly separated from other tissues (Supplementary Fig. 6b). Then, all the genes (22,674, FPKM ≥ 1) expressed in the ovule, ovule-F, and fiber, were analyzed to create a heatmap (Supplementary Fig. 6c). Based on the expression trends in fiber from 5 to 20 DPA, the genes were categorized into three groups: Up, Down and Others (Supplementary Data 10). The homologous genes GhMYB25, GhHOX3, and other factors28 associated with fiber development were also included. Most genes belonged to the Up group. Six genes, including putative GaEX1 (Ga10g01583) and GaHOX3 (Ga12g00054), were selected for qPCR validation (Supplementary Data 11). The obvious tissue-specific expression of the genes was observed in ovule (0–20 DPA) and fiber (5–20 DPA) (Supplementary Fig. 6d). Due to the fiber separation from ovule, the expression specificity of the genes was more pronounced in pure fiber tissue (Supplementary Fig. 6c, d). In addition, in situ hybridization was used to detect the transcriptional expression of Ga03g00156, which encodes an ATP-dependent DNA helicase or nuclease. Ga03g00156 had strong hybridization signals in the fibers and epidermis of outer integument of ovules as shown in Supplementary Fig. 6e, further confirming its tissue specificity revealed from above RNA-seq data. This comprehensive and accurate gene expression atlas in G. arboreum provides a valuable resource for future studies on cotton tissue specification and development. Multiple TSSs and alternative promoter usage CAGE-seq, designed to stringently select for 5′ complete RNA molecules, has been used to identify TSSs at single-base resolution23. In the present study, approximately 83 million CAGE-seq reads on average for each of the 16 tissues were successfully mapped to the G. arboreum genome, which generated 91,707 TSS clusters using the program paraclu29. Requiring an average of TPM (number of Tags Per Million mapped) >0.5 across all tissues, we identified 44,728 TSS clusters, corresponding to 22,863 gene loci (Supplementary Data 12, 13), of which 38.4% had two or more TSS clusters (Fig. 2a). Multiple transcription initiation and alternative promoter usage in G. arboreum. a Statistics of transcription start site (TSS) per gene. 
b High resolution of TSS identification from CAGE-seq. c Statistics of TSS usage signal in genes with multiple TSSs including distal- (DT), proximal- (PT) and coding-TSS (CT). d Statistics of 5′ end changes and the distribution of changes in CDS length caused by alternative TSS usage. Potential functional RNA elements in alternative 5′UTR regions including RBP binding, U-rich, RG4, 2nd structure, and uORFs. The occurrences of events and associated genes are shown. e Example of a gene with long (L) and short (S) isoforms due to alternative TSS usage (arrows). The sequence encoding the transmembrane (TM) helix domain is marked in blue. CAGE-seq and RNA-seq signals are shown in purple and gray, respectively. f 5′ RACE validation of the change in the use of TSS. g Predicted 3D protein structures for the long and short isoforms (top: side-view, bottom: top-view). The lost TM helix domains caused by alternative TSS is highlighted in red. h The comparison of 15NO3− uptaking activity between NRT-L, NRT-S, and empty vector cell lines (two-tailed t-test, n = 3, error bar represents s.d.). The significance levels are indicated by asterisks (*p-value < 0.05; **p-value < 0.01; ***p-value < 0.001), and the median values in box plots are shown. The source data underlying Fig. 2b–d, 2f, and 2h are provided as a Source Data file To estimate the accuracy of the CAGE-seq data, we defined the cluster resolution as the minimum-size interval that accounts for more than 80% of the total signal in the cluster. We assessed the cumulative distribution of CAGE signal across all 16 tissues (Fig. 2b) and found that 14.5% of TSS clusters had single-base resolution and 71.6% of TSS clusters had 10-base resolution. To validate the TSSs, we randomly selected 55 genes to perform the 5′ rapid amplification of cDNA ends (5′RACE) assay in a pooled RNA sample of 16 tissues. The PCR results showed the expected bands in 47 genes (positive ratio = 85.5%) (Supplementary Fig. 7 and Supplementary Data 14). In addition, multiple classical promoter motifs, including Initiator (Inr), TATA box and Y Patch (pyrimidine patch)30 in plants, were found enriched near the identified TSSs (Supplementary Fig. 8a). Together, these results reflect the high quality of our CAGE-seq data. Multiple TSSs in one gene locus would create the potential to rewire the transcription by using an alternative TSS. Next, we examined the genome-wide TSS usage pattern in cotton. Multiple TSSs in one gene were classified into three types: distal TSS (DT), proximal TSS (PT), and TSS in the coding region (CT) (Fig. 2c left). We found that the DT were more likely to be chosen (Supplementary Data 15) and expressed at much higher level than PT and CT (two-tail Wilcoxon rank sum test, p-value < 2.2e-16) (Fig. 2c right). Our data revealed that alternative promoter usage could alter the length of 5′UTR or protein N-terminus in 5888 and 2800 genes, respectively (Fig. 2d top). In alternative 5′UTR regions, we detected enrichment of potential functional RNA elements, including 8387 miRNA binding sites, 6377 U-rich elements, 5,370 upstream open reading frames (uORFs), and 129 RNA G-quadruplexes (RG4), each of which may affect mRNA stability or translational efficiency31 (Fig. 2d bottom and Supplementary Data 16). The miRNA binding, U-rich and uORF contributed the most to the identified motifs, indicating these alternative regions of the 5′UTRs might have important regulatory roles in post-transcriptional regulation of gene expression. 
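The cluster-resolution metric used above (the minimum-size interval accounting for more than 80% of a cluster's signal) reduces to a minimal-window search over the per-base CAGE counts of a cluster. A small helper is sketched below; the parallel position/count input format is an assumption, and the same function applies to the PolyA-seq TES clusters discussed later.

```python
# Smallest genomic window (in bp) that still contains >= `fraction` of a TSS
# cluster's CAGE signal, i.e. the "cluster resolution" described in the text.
def cluster_resolution(positions, counts, fraction=0.8):
    pairs = sorted(zip(positions, counts))
    pos = [p for p, _ in pairs]
    cnt = [c for _, c in pairs]
    need = fraction * sum(cnt)
    best = pos[-1] - pos[0] + 1                 # worst case: the whole cluster
    acc, left = 0.0, 0
    for right in range(len(pos)):
        acc += cnt[right]
        while acc - cnt[left] >= need:          # shrink while still enough signal
            acc -= cnt[left]
            left += 1
        if acc >= need:
            best = min(best, pos[right] - pos[left] + 1)
    return best

# A sharp cluster: one base carries >80% of the tags -> single-base resolution.
print(cluster_resolution([101, 102, 103, 104], [2, 1, 95, 2]))    # -> 1
# A broad cluster: uniform signal over 20 bp -> 16-bp resolution at 80%.
print(cluster_resolution(list(range(200, 220)), [5] * 20))        # -> 16
```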
Next, we analyzed the consequences on protein for a set of 2800 proteins with alternative TSSs in coding regions. Most of the alternative TSSs, while not causing a frame shift, induced N-terminal truncation with different lengths up to 500 amino acids (Fig. 2d), which would lead to losses of N-terminal subcellular localization signals or protein domains based on predictions (Supplementary Data 16, 17). In Arabidopsis, through differential TSS usage, the genes encoding for monodehydro ascorbate reductase (MDAR), stromal ascorbate peroxidase (sAPX)32, and glycerate kinase (GLYK)11, have been shown to produce both full-length and truncated proteins to target different subcellular compartments. Based on our data, their orthologs in G. arboreum also possessed dual promoters and exhibited differential TSS usage among tissues, indicating the genes may be subjected to similar transcriptional regulation for subcellular localization in cotton (Supplementary Data 17). Further, we analyzed the TSS usage dynamics across 16 tissues and uncovered 6,207 genes with promoter switches between tissues. Interestingly, we found that TSS selection might control the existence or loss of structure domain(s) for genes involved in ABA signaling (protein phosphatase PP2C, Ga06g00624), auxin transport (ABC transporter, Ga14g01845), nitrate transport (NRT1.2, Ga12g01164), and membrane signal transduction (leucine-rich repeat receptor-like protein kinase, Ga11g00886) (Supplementary Data 16). Figure 2e shows a representative example of our data, Ga12g01164, a homolog of AtNRT1.2 encoding nitrate transporter. Our ssRNA-seq and CAGE-seq are consistent with the possibility that ovules at 0 and 5 DPA mainly utilize CT TSS to express a short mRNA NRT-S, whereas late stage ovules (10 and 20 DPA) and other tissues switch to DT TSS to express a longer mRNA NRT-L (Fig. 2e). Indeed, our 5′RACE assay results confirmed the TSS switch (Fig. 2f). The NRT1.2 has 12 trans-membrane (TM) domains, but due to the change from distal to proximal TSS, NRT-S encodes an N-terminal truncated version with loss of the four TM domains, which might induce a protein conformational change from the compact to relaxed state for the TM cluster based on 3D structure modeling (Fig. 2g). For comparing their activity on NO3-transport, we performed ectopic expression for NRT-L and NRT-S in HEK-293 cells to test the isotope 15NO3- uptake. The results from the assays at 2 and 8 h further confirmed their difference in the activity of NO3- uptake, for which NRT-S was significantly lower than NRT-L (Fig. 2h). These results imply that at early developmental stage (0 and 5 DPA), the ovule might have unique nitrogen transport and metabolism, distinct from other tissues. The above results indicate that differentially regulated alternative TSSs are a common feature in cotton mRNAs, which usually generate alternative N-termini in mRNA or protein to regulate tissue specification and development. Developmentally regulated TES selection Alternative TES selection or APA generates mRNAs with various 3′ ends that differ either in coding sequence or in 3′UTRs, which contributes to the transcriptome complexity by regulating mRNA stability, localization, and translation efficiency33. In cotton, APA has not yet been systematically investigated. Based on the 3′ ends information from PolyA-seq, we profiled genome-wide TESs for all expressed genes in 16 tissues. 
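Calling a promoter switch between two tissues, as done above for the 6,207 genes and for the NRT1.2-like case, can be reduced to comparing the relative usage of each TSS cluster per tissue. The criterion sketched below (the dominant TSS must differ and its usage fraction must drop by at least 25%) is an assumption for illustration; the authors' exact thresholds are in their Methods.

```python
# Sketch of calling a TSS (promoter) switch between two tissues from CAGE
# usage.  Input: per-tissue dict of {tss_id: TPM}; the 25% margin is an
# illustrative choice, not the authors' rule.
def dominant_tss(usage):
    return max(usage.items(), key=lambda kv: kv[1])[0]

def usage_fraction(usage, tss):
    total = sum(usage.values())
    return usage.get(tss, 0.0) / total if total else 0.0

def promoter_switch(usage_a, usage_b, min_delta=0.25):
    """True if the dominant TSS differs and its relative usage shifts enough."""
    tss_a, tss_b = dominant_tss(usage_a), dominant_tss(usage_b)
    if tss_a == tss_b:
        return False
    delta = usage_fraction(usage_a, tss_a) - usage_fraction(usage_b, tss_a)
    return delta >= min_delta

# NRT1.2-like pattern: early ovule favours the coding-region TSS (CT),
# while leaf switches to the distal TSS (DT).
ovule_0dpa = {"DT": 3.0, "CT": 40.0}
leaf       = {"DT": 55.0, "CT": 4.0}
print(promoter_switch(ovule_0dpa, leaf))    # -> True
```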
From a total of 1.1 × 10^9 PolyA-seq reads mapped to the cotton genome, we generated 70,502 TES clusters, of which 43,237 TES clusters had an average TPM > 0.5, corresponding to 23,736 gene loci (Supplementary Data 18, 19). Among all expressed genes in cotton, 40.2% had at least two TESs (Fig. 3a), with 1.56 polyA sites per gene on average; this proportion is lower than the 74.9% in Arabidopsis34 but close to the 47.9% in rice35. Similar to the TSS analysis mentioned above, we next assessed the cumulative distribution of polyA site resolution across all 16 tissues, showing that 36.4% of TES clusters had single-base resolution and 76.9% of TES clusters had 10-base resolution (Fig. 3b).

Multiple transcription termination and alternative 3′UTR usage in G. arboreum. a Statistics of transcription end sites (TESs) per gene. b Cumulative curve of base resolution of the TES cluster signal from PolyA-seq. c The nucleotide composition around the TES. d Statistics of TES usage in genes with multiple TESs including distal- (DT), proximal- (PT) and coding-TESs (CT). e Statistics of the effect on RNA (pie chart) and the distribution of changes in CDS length due to the use of an alternative TES. f Potential functional RNA elements in the alternative 3′UTRs including RBP binding, U-rich, and RG4 motifs. g Average 3′UTR length across tissues (error bar represents s.e.m.). h Number of genes with alternative TES usage between pairwise tissues. i Genes with changes in APA between ovule (20 DPA) and seed (left), and two representative gene examples (right). The significance levels are indicated by asterisks (*p-value < 0.05; **p-value < 0.01; ***p-value < 0.001), and the median values in box plots are shown. The source data underlying Fig. 3b–h are provided as a Source Data file

To further confirm the reliability of the deduced TESs, we scanned for the canonical 3′ end processing motifs, including CFIm, U-rich, and PAS36, all of which were indeed enriched around the TESs as expected (Supplementary Fig. 8b). In addition, the nucleotide (nt) composition of sequences ±50 nt around the TES was analyzed, and a typical A/U enrichment pattern, absent from control sequences, was found at the polyA site, similar to previously reported results in Arabidopsis37 and rice35 (Fig. 3c). These results indicate that our TES annotation is reliable and of high resolution.

To examine the APA dynamics across tissues in cotton, we quantified TES usage from our PolyA-seq data and identified 10,681 genes exhibiting APA (Supplementary Data 18). To differentiate the pattern of TES usage, we classified multiple TESs in single genes into three types: distal TES (DT), proximal TES (PT), and TES in the coding region (CT). Unlike the situation in transcription initiation, the distal TESs (DT) tended to be used less frequently than PT (Supplementary Data 20); however, both DT and PT were expressed at much higher levels than CT (Fig. 3d). This is highly reminiscent of previous studies in rice35, but different from mice24, in which distal TESs appeared to be used more frequently and expressed at a higher level than PT. This intriguing phenomenon indicates different preferences between plants and animals for the use of alternative TESs. Most of the 3′ end variations (9432 of 10,681, 87.5%) were limited to the 3′UTR; only 8.3% of variations occurred between coding regions and 3′UTRs, and 3.4% would change a coding RNA to a lncRNA by disrupting the open reading frame (ORF) (Fig. 3e).
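The nucleotide-composition analysis around TESs described above can be illustrated with a small sketch; the data layout (a genome dictionary and a list of TES coordinates) and all names are assumptions for illustration, not the published code.

```python
# Minimal sketch (assumed data structures, not the published pipeline) of the
# nucleotide-composition profile in a +/-50 nt window around annotated TES positions.
from collections import Counter

def composition_profile(genome: dict, tes_sites: list, flank: int = 50):
    """genome: {chrom: sequence}; tes_sites: [(chrom, 0-based position, strand)].
    Returns a list of per-position base frequencies from -flank to +flank."""
    counts = [Counter() for _ in range(2 * flank + 1)]
    for chrom, pos, strand in tes_sites:
        seq = genome[chrom][pos - flank: pos + flank + 1].upper()
        if len(seq) != 2 * flank + 1:
            continue                                     # skip sites too close to a contig end
        if strand == "-":                                # reverse-complement minus-strand windows
            seq = seq[::-1].translate(str.maketrans("ACGT", "TGCA"))
        for i, base in enumerate(seq):
            counts[i][base] += 1
    return [{b: c[b] / max(sum(c.values()), 1) for b in "ACGT"} for c in counts]

# Toy usage with a made-up genome and a single hypothetical site.
genome = {"Chr01": "TTTTTAAAAATTTTTAAAAA" * 10}
profile = composition_profile(genome, [("Chr01", 100, "+")])
print(profile[50])   # base frequencies exactly at the polyA site
```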
To further dissect the effect of varying 3′UTR lengths, we analyzed the composition of alternative 3′UTR regions and found an enrichment of RNA functional motifs (Fig. 3f and Supplementary Data 21), including miRNA binding sites, U-rich elements, and RG4s. Next, we examined shortening and lengthening patterns of the 3′UTRs across 16 tissues, and found that the average length of 3′UTRs showed great variation among tissues (Fig. 3g and Supplementary Data 22). In particular, gradual shortening was observed during ovule development from 0 to 20 DPA, followed by a sudden reversal toward 3′UTR lengthening at the final ovule developmental stage, the mature seed (dashed box in Fig. 3g). Such reproductive-development-associated variation in 3′UTR length has been reported in mouse spermatogenesis, with 3′UTRs being shortest in spermatids38, and in the C. elegans germline, with a bias toward proximal TESs in males39. The gradual shortening followed by a lengthening transition in cotton may be of interest for future studies. To gain insight into the APA differences from our genome-wide data, we identified the genes with 3′UTR switching between pairwise tissues, as shown in the heatmap (Fig. 3h). The anther, stigma and petal (tissue IDs 1–3) showed relatively low numbers of APA switching events compared with other tissues, whereas seed and ovules (tissue IDs 4–8) exhibited larger numbers of switching cases compared with vegetative tissues (IDs 9–11, 14–16). Most APA switching cases involved ovule and seed, implying that APA in cotton may play an important role in fiber development and seed maturation. Among the 1899 mRNAs expressed in both ovule (at 20 DPA) and seed, 189 genes exhibited APA switching with robust differences (Fig. 3i). In these cases, more genes used the distal polyA sites in seed, such as Ga07g00163, encoding a histidine kinase, which might play a role in signal transduction across the cell membrane40. Conversely, some genes, including Ga05g01873, encoding a BTB domain-containing protein that potentially modulates chromatin structure41, tended to use the proximal polyA sites in seed. Collectively, these results establish a high-resolution map of polyA sites in cotton and reveal their sequence features and dynamic regulation during development and tissue specification.

Dynamic splicing switch and microexons in cotton

Alternative splicing (AS) in cotton has been investigated in G. raimondii (DD genome)42 and G. hirsutum (AADD genome)20, but not yet in the G. arboreum AA genome. Based on the IGIA annotation, we performed a systematic analysis of AS in 23,451 multi-exon genes of G. arboreum. In total, we identified 23,756 AS events in 42.1% of multi-exon genes (Supplementary Data 23–26), ~2.4 AS events on average in genes with AS (Fig. 4a).

Alternative splicing regulation and hotspots in G. arboreum. a Statistics of four categories of AS events and associated genes identified in 16 tissues using the IGIA annotation. A3SS, alternative 3′ splice site; A5SS, alternative 5′ splice site; RI, intron retention; SE, exon skipping. b Effects of AS events on protein functional domains. c Size distribution of internal exons. Dashed line: 51 nt threshold. d A microexon (ME) located within the AP2 DNA binding domain and conserved in plants. e ME validation using RT-PCR. I, inclusion; S, skipping. The bar chart reflects the exon inclusion ratio across 16 tissues (three replicates, error bar represents s.d.).
f EMSA assay showing that the two isoforms with or without the ME, I and S, have different DNA binding affinities. g Representative gene example with an AS hotspot (highlighted in light green) and RT-PCR validation of the splicing products from the "cold" region (CR) and hotspot (HS). h Simulation of AS hotspots in representative plant and animal genomes. i Length comparison of exons and introns from HSs and CRs. The significance levels are indicated by asterisks (*p-value < 0.05; **p-value < 0.01; ***p-value < 0.001). The source data underlying Fig. 4e–h are provided as a Source Data file

Our results revealed that intron retention (RI) constituted the majority (62.2%) of all AS events in G. arboreum, the highest proportion among all reported plants; this has also been observed in two other Gossypium species. The percentages of the three other patterns of AS events were 18.8%, 11.3%, and 7.7% for alternative 3′ splice site (A3SS), alternative 5′ splice site (A5SS), and exon skipping (SE), respectively (Fig. 4a). We further characterized the impact of AS events on protein-coding sequences, especially various functional domains, and found that AS in cotton greatly affects the integrity of protein domains annotated in Pfam and CDD, as well as sequences with disordered regions, short linear motifs (Slim), and trans-membrane (TM) features (Fig. 4b and Supplementary Data 27–30). Genes with different AS events were randomly selected to test the accuracy of the AS annotation using RT-PCR (Supplementary Fig. 9a–d and Supplementary Data 31). The highly dynamic splicing switches across different tissues detected by ssRNA-seq were consistent with our RT-PCR findings. The AS quantification for all the IGIA genes based on our ssRNA-seq data provides comprehensive AS dynamics profiles across tissues in G. arboreum (Supplementary Data 23–26).

Next, two special types of transcription coupling events were investigated: intronic TSS and intronic TES switching, linking AS with alternative promoters or polyadenylation, respectively24. An intronic TSS (or TES) in a skipped terminal exon (Is event) or in a composite terminal exon (Ic event) will produce or eliminate a splicing junction, respectively (Supplementary Fig. 10a). A genome-wide survey showed that coupling events between transcription initiation (or polyadenylation) and splicing occurred extensively in cotton, and the Ic event was more widely present in the transcriptome at either the 5′ end or the 3′ end than the Is event (Supplementary Fig. 10b). Several gene loci of the cotton genome contained both Ic and Is events. Two gene examples with mixed types of events at the 5′ end and 3′ end are presented in Supplementary Fig. 10c, d, showing the gene complexity contributed by intronic TSSs or TESs.

Microexons (MEs) are a particular class of exons whose lengths can be as short as 3 nt43. Here we comprehensively characterized MEs in cotton G. arboreum. Similar to humans43, the length distribution of internal exons showed a sharp decrease around 50 nt (Fig. 4c), and we defined the length threshold for MEs as 51 nt in cotton, as previously reported. By scanning internal exons equal to or shorter than 51 nt, 7784 MEs were identified from IGIA transcripts (Supplementary Data 32). Figure 4d shows a 45 nt ME in Ga05g01029, encoding the ethylene-responsive transcription factor RAP2-7, an AP2 domain-containing transcription factor. This ME encodes part of the β-sheet and α-helix of the AP2 domain.
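As an illustration of the microexon scan described above (internal exons of at most 51 nt), the following is a minimal sketch under an assumed transcript model; the transcript identifier and coordinates are hypothetical.

```python
# Minimal sketch (assumed transcript model) of the microexon scan: internal exons
# of at most 51 nt are flagged, following the threshold used in the text.
def find_microexons(transcripts: dict, max_len: int = 51):
    """transcripts: {transcript_id: [(exon_start, exon_end), ...]} with 0-based,
    end-exclusive coordinates sorted along the transcript. Returns a set of
    (transcript_id, start, end) for internal exons <= max_len."""
    microexons = set()
    for tid, exons in transcripts.items():
        for start, end in exons[1:-1]:                  # first and last exons are excluded
            if end - start <= max_len:
                microexons.add((tid, start, end))
    return microexons

# Toy usage: the 45 nt middle exon is reported, terminal exons are ignored.
tx = {"toy_transcript.1": [(0, 300), (1000, 1045), (2000, 2400)]}
print(find_microexons(tx))   # {('toy_transcript.1', 1000, 1045)}
```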
Comparing the gene structures of the orthologs from the lycophyte Selaginella moellendorffii to rice and Arabidopsis, the sequence and length of this ME were highly conserved, although the lengths of the other exons in this gene varied greatly among different plant genomes (Fig. 4d). The RT-PCR assays showed that the AS of this ME had an obvious tissue-specific pattern in cotton, as shown by the exon inclusion ratios across tissues (Fig. 4e). Furthermore, we examined the DNA binding activity of the RAP2-7 protein with ME skipping (S) or inclusion (I) using an EMSA assay. Figure 4f shows that at the same protein concentration, AP2-I could bind the DNA probe while AP2-S almost failed to bind, indicating that the transcription factor RAP2-7 might vary its binding activity to the genome in different tissues via regulated splicing of a microexon, a mechanism shared by animals and plants.

In addition, several regions in certain genes showed highly enriched AS events, which we termed AS hotspots (Fig. 4g, highlighted light green box). Based on a statistical model of AS hotspots, we scanned the cotton genome with a high-stringency criterion and detected 135 genes that contained at least one AS hotspot (Supplementary Fig. 10e–g). Gene ontology (GO) analysis of these genes showed that the molecular functions were mainly enriched in ATP and RNA binding proteins and protein kinases (Supplementary Fig. 10h). Further analysis of the protein features of AS hotspots indicated that most hotspots affected a conserved protein domain (Supplementary Data 33–35). For example, in the protein kinase shkB-like, the AS events tended to concentrate in a hotspot encoding the MAPK kinase domain. The different frequencies of AS in the "cold" region and the hotspot of this gene were further confirmed using RT-PCR (Fig. 4g, right). Applying the same computational model, we found that AS hotspots also exist in other plants, such as rice and Arabidopsis, but not in humans and mice (Fig. 4h). This indicates that the AS hotspot might be a plant-specific splicing phenomenon, probably due to plant-specific cis-elements and trans-factors. Intriguingly, comparison of gene structures between hotspots and cold regions showed that AS hotspots tended to have shorter exons than cold regions, whereas no such difference was observed for introns (Fig. 4i). In summary, our analysis provides comprehensive information on the alternative splicing landscape in G. arboreum, which will be of great value in addressing the regulatory mechanisms and functions of alternative splicing in cotton.

Discovery of polycistrons and their genomic features

Polycistronic transcription, the co-transcription of more than one open reading frame (ORF) or gene from a single promoter to make a polycistronic mRNA, is pervasive in prokaryotes and fungi27, but rare in other eukaryotes. In plants, a rare example was observed in tomato: a bicistronic unit for the glutamyl kinase and glutamyl phosphate reductase loci, which resembled prokaryotic polycistronic operons44. Analysis of our IGIA annotation for full-length transcripts in G. arboreum indicated that many adjacent loci in the cotton genome exhibited polycistronic transcription spanning two or more ORFs, which were annotated as independent monocistronic genes in Cottongen and CGP. Figure 5a and Supplementary Fig. 11 show a representative tricistron and bicistron, respectively (Supplementary Data 36).

Identification and genomic features of the polycistrons in G. arboreum. a An example of a polycistron with three genes supported by the Pacbio long reads.
The genes in the polycistron could also be independently transcribed, with their own TSSs and TESs. The probes for RT-PCR validation of polycistron expression are marked below the IGIA transcripts. b RT-PCR validation of polycistron transcripts from transcriptional read-through. Four pairs of probes are indicated. c Statistics of the distances between CDS pairs in a polycistron compared with the distances between a CDS and its closest CDS in IGIA genes. d The number of polycistrons with two and three CDSs identified using IGIA genes. e Expression correlation between CDS pairs in a polycistron and random CDS pairs. f–h Comparison of GO similarity (f), proportion in the same KEGG pathway (g), and potential protein–protein interactions (h) between CDS pairs in a polycistron and random CDS pairs. i Co-linear analysis of cotton polycistrons in 54 plant genomes. The significance levels are indicated by asterisks (*p-value < 0.05; **p-value < 0.01; ***p-value < 0.001), and error bars represent s.d. The source data underlying Fig. 5b and 5f–i are provided as a Source Data file

In Fig. 5a, the tricistron spans three genes, Pectinesterase 1-like, PolyA binding protein 2, and Glutamate dehydrogenase 2, and the bicistron spans the former two genes. Results from RT-PCR and Sanger sequencing showed that the polycistronic transcripts had two isoforms with lengths of 4.2 and 2.8 kb, due to an alternative intron retention event. The TSS and TES signals from mixed tissues indicated that the three genes might have their own transcription units and exhibit expression patterns different from the polycistron, which was supported by our ssRNA-seq and RT-PCR validation (Fig. 5b). The tissue-specific expression of the polycistron implied that it may be activated under the control of tissue specification signals. Based on a stringent criterion, 1115 polycistrons (1081 bi- and 34 tri-cistronic transcripts) were identified in G. arboreum (Fig. 5d and Supplementary Data 37), corresponding to 5% of all annotated loci in this study.

To understand the potential mechanisms and functions of polycistrons, we investigated several genomic features in these polycistrons and in control genes. The distances between ORFs inside a polycistron (between the stop codon of the first ORF and the start codon of the second ORF) were significantly shorter than those between monocistronic genes (Fig. 5c). Genes from the same polycistron showed stronger co-expression correlation across tissues than random controls (Fig. 5e). In addition, genes in the same polycistronic transcript tended to have similar GO terms, to be located in the same KEGG pathways, and to interact with each other (Fig. 5f–h, respectively), probably reflecting better coordination through their co-transcription. To investigate the origin of polycistrons, we performed co-linearity analysis of the polycistrons against other plant genomes. The results (Fig. 5i) indicated that many polycistrons show co-linearity with gene loci in a large number of plants. The co-linearity of some polycistrons exists in more than 40 species; such conservation indicates that polycistrons may play important regulatory roles. In a recent Arabidopsis study, under salt stress or in CTD PHOSPHATASE-LIKE 4 knockdown lines, transcriptional read-through from a small nuclear RNA to its downstream gene was activated to produce a polycistron45. In contrast, read-through has also been considered wasteful or harmful transcription of intergenic sequences, which may interfere with transcription of downstream genes46.
Our results provide evidence that polycistronic transcription, or transcriptional read-through, is a widespread phenomenon during cotton growth and development, indicating that it is not due to transcriptional abnormality but is instead a gene expression regulatory strategy used in plants.

We have compiled a landscape of genes for cotton G. arboreum with accurate TSSs, TESs, splicing sites, and a complete collection of isoforms, as well as their dynamic profiles across multiple tissues. This comprehensive and systematic transcriptome study integrated multiple RNA-seq techniques and will provide an important reference for cotton and plant researchers. The emerging new RNA sequencing technologies require a continuous update of analysis tools to further improve the annotation accuracy of the transcriptome. Inspired by the GRIT algorithm25, IGIA integrated CAGE-seq and PolyA-seq to determine the gene start and end boundaries, respectively. For gene body regions, GRIT only used short-read data from RNA-seq to algorithmically reconstruct isoform structures, which is challenging16, whereas IGIA took the full-length structures from Pacbio long reads as isoform skeletons and used short reads from RNA-seq to correct local errors in the long reads, thus avoiding error-prone reconstruction from short reads alone. Integrating the four modalities of RNA-seq data, IGIA showed good performance on gene annotation, supported by various experimental validations in this study. IGIA tends to detect isoforms with distinct TSSs or TESs (Supplementary Fig. 12a, b) and with reliable expression signals (Supplementary Fig. 12c, d). By requiring multiple lines of evidence for splicing junctions, IGIA aims to identify reliable gene structures, at the cost of a potential underestimation of isoform numbers. IGIA will be further optimized to reduce computation time and memory requirements, and to improve the error-correction capability on long reads to enhance its sensitivity.

In previous genome-wide association studies (GWAS), many agronomic trait-associated sites in cotton have been identified; however, non-CDS polymorphisms are difficult to interpret or prioritize for in-depth study. With our complete catalogues of functional elements, including newly available 5′UTRs and 3′UTRs, researchers can formulate novel hypotheses for further dissection of important sequence variants outside the CDS. In our investigation, we integrated 932 GWAS sites onto the AA genome of G. arboreum (Supplementary Data 38) and observed that many GWAS sites were localized in regulatory regions, including transcription initiation and termination sites, alternative 5′/3′ UTRs, and constitutive or alternative splicing (AS) sites (Supplementary Fig. 3c and Supplementary Table 5). This provides an important foundation for further decoding the molecular mechanisms linking these GWAS sites to important agronomic traits, such as fiber length and quality in cotton.

Aberrant use of one promoter over another in humans has been reported to cause various diseases, including cancer47. However, in plants, there have been only limited reports on identifying precise TSSs, restricted to maize48 and Arabidopsis11,30. In the present study, we have identified 91,707 TSS clusters for 26,763 gene loci, almost 40% of which had multiple promoters, indicating extensive usage of alternative promoters in the cotton genome.
To date, the functional significance of alternative promoters and their roles in plant development have been largely unexplored, with the rare exceptions of the well-characterized Arabidopsis genes GLYK11 and BES149. In this study, 2800 genes with different protein isoforms (CDS changes) caused by alternative TSS selection were identified in cotton G. arboreum (Fig. 2d), including genes whose Arabidopsis homologs were reported to target different subcellular compartments through the use of an alternative promoter. Moreover, the effects of TSS selection on protein length might not be limited to the subcellular targeting signal sequence at the N-terminus, but may also change protein structure and function (e.g., loss of TM helices in the nitrate transporter NRT1.2, which might attenuate its NO3- uptake activity) (Fig. 2f–h). These results broaden the understanding of plant TSS selection, providing new research perspectives for plant scientists.

Cis-elements, which are commonly present in the UTRs of mRNAs, can regulate RNA stability, translation efficiency, transport, and the subcellular localization of the translated proteins50. In this study, 5888 and 9432 genes were identified in which the selection of TSS and TES could produce distinct 5′UTRs and 3′UTRs, respectively. Both variable regions are enriched for different functional elements. U-rich motifs have been reported to act as control signals regulating 3′-end processing36, and as cis-elements targeting mRNAs to cytoplasmic processing bodies during ethylene signaling in Arabidopsis51. In the present study, the U-rich motif was found to be widely present in 5′UTRs and 3′UTRs, indicating that this motif might have an important role in post-transcriptional regulation. In addition, results from the present study revealed that uORFs frequently occur in the variable regions of the 5′UTR generated by use of an alternative TSS. uORFs often function as translational repressors of their downstream major ORFs52. In a recent study, a TSS switch that shortened the 5′UTR caused skipping of a uORF, evading uORF-mediated translational inhibition and/or mRNA decay53. Our analysis showed that 19.6% of genes in G. arboreum contained uORFs in the 5′UTRs of their mRNAs, of which 25.1% (1780) were located in alternative 5′UTRs, indicating that TSS selection-controlled uORF translational repression could be a pervasive regulatory mode in plants.

Recently, Wang et al. performed Pacbio-seq and studied alternative splicing and polyadenylation in an allotetraploid cotton (G. barbadense)20. The authors found ~4.8 AS events and 2.82 polyA sites per gene, larger than the corresponding numbers (2.4 and 1.56) in G. arboreum, probably due to the conservativeness of IGIA in identifying isoforms or to greater complexity of RNA processing in tetraploid than in diploid cotton. In this study, important phenomena, including AS hotspots and polycistronic transcription, were discovered. The AS hotspot-containing genes tended to encode proteins with functions enriched in ATP binding, RNA binding, and receptor protein kinase activity (Supplementary Fig. 10h), and AS hotspots tended to be located in conserved protein domains, including RNA recognition motifs (such as RRM and PPR), protein interaction domains (such as WD40 and BTB/POZ), and protein kinase activation regions. For example, in RNA binding proteins, the high variability in the AS hotspot region may increase the plasticity for expanding the RNA recognition range or fine-tuning its affinity.
However, the biogenesis and functions of AS hotspots and polycistrons in plants are both issues that warrant further investigation.

Plant materials

The diploid AA genome cotton G. arboreum cv. Shixiya1 (SXY1) was grown in a soil mixture in a fully automated greenhouse that mimicked the natural environment for cotton growth. For RNA sequencing, the anther, stigma, petal, bract, sepal and whole flower were collected at 0 DPA. The phloem and leaf tissues were collected at the flowering stage. The root, stem and cotyledon were collected from seedlings with two fully expanded leaves. The four ovule samples were collected from the ovaries of flowers at four developmental stages (0, 5, 10, and 20 DPA). The fiber and ovule (without fiber) samples were manually separated from ovules at the corresponding developmental stages. Dry seeds were collected from mature cotton bolls. All tissues were immediately frozen in liquid nitrogen and stored at -80 °C until RNA extraction.

Library construction and sequencing

Total RNA was isolated using a modified protocol as described54. For Pacbio sequencing, the RNA samples from the 16 tissues were pooled to construct the cDNA library using the SMARTer PCR cDNA synthesis kit according to the manufacturer's instructions. The cDNA library was then size-fractionated into 0.5–1, 1–2, 2–3, and >3 kb fractions, and single-molecule sequencing was performed on the Pacbio RS II using the P6-C4 kit with 240 min of movie time for each flow cell (Nextomics Biosciences Institute, Wuhan, China). The ssRNA-seq, CAGE-seq, and PolyA-seq for the RNA samples of the 16 tissues were performed on the Illumina Hiseq 3000 platform in PE150 mode. For ssRNA-seq, the total RNA samples from the 16 tissues, with two biological replicates, were first depleted of rRNAs with the Ribo-Zero rRNA Removal Kit (Epicentre, Madison, WI, USA); strand-specific cDNA libraries were then prepared according to a published protocol55 and finally constructed into sequencing libraries using the NEBNext UltraTM Directional RNA Library Prep Kit for Illumina (New England Biolabs, USA). The CAGE-seq and PolyA-seq libraries for each tissue were constructed according to the RAMPAGE23 and 3′READS+56 protocols, respectively. To allow removal of PCR duplicates in CAGE-seq and PolyA-seq, the adapter primers were modified and random 6xN barcodes were inserted.

For ssRNA-seq, the raw reads were filtered to remove adaptors and low-quality bases using Trimmomatic (v0.36). Filtered reads were first aligned to the cotton genome7 using STAR (v2.5.3a) in end-to-end mode to scan splice junctions. The reads were then realigned to the genome with the splice junctions from the 16 tissues. The FPKM values and counts for different genomic features were calculated using StringTie (v1.3.3) and subread (v1.5.3), respectively. For Pacbio data, due to the high rates of indels and mismatches, the long subreads were corrected with ssRNA-seq data using proovread. The corrected reads were then aligned to the genome using the STARlong program with the following parameters: --outSAMattributes NH HI NM MD --readNameSeparator space --outFilterMultimapScoreRange 1 --outFilterMismatchNmax 2000 --alignTranscriptsPerWindowNmax 10000 --scoreGapNoncan -20 --scoreGapGCAG -4 --scoreGapATAC -8 --scoreDelOpen -1 --scoreDelBase -1 --scoreInsOpen -1 --scoreInsBase -1 --alignEndsType Local --seedSearchStartLmax 50 --seedPerReadNmax 100000 --seedPerWindowNmax 1000 --alignTranscriptsPerReadNmax 100000.
For CAGE-seq data, the adapters in raw reads were removed using Cutadapt (v1.13). The 5′ random barcode in Read 1 and the GGGG adjacent to the barcode were trimmed, and only reads with an insert sequence longer than 75 nt were used. Potential rRNA reads and PCR duplicates were then removed according to the barcode signature and the genomic position mapped with STAR in end-to-end mode. Reads with a mapping quality score (MAPQ) ≥20 were used for further analysis. For PolyA-seq, the adapters in raw reads were removed using Trimmomatic. The 5′ random barcodes in Read 1 and Read 2, the polyA stretch in Read 1, and the polyT stretch in Read 2 were trimmed. Only reads with an insert sequence ≥50 nt and a polyT length between 11 and 15 nt were used. Potential rRNA reads and PCR duplicates were removed based on the barcode signature and the genomic position mapped with STAR in end-to-end mode. Reads with MAPQ ≥20 were used for downstream analysis. The TSS and TES reads, from CAGE-seq and PolyA-seq respectively, were clustered using the program paraclu29 (parameters: minValue 20, -l 200, -d 5). The clusters across different tissues were merged. Potential TSS and TES clusters with TPM ≥0.5 in any of the 16 tissues and with the largest signal ≥10 in at least three tissues were considered reliable TSS and TES clusters. The summit with the largest signal in a cluster was taken as the TSS or TES position and was used for reconstructing IGIA transcripts.

IGIA pipeline

For each potential gene locus from the linkage analysis, individual elements, including TSSs, exons, introns, and TESs, were first identified from all the sequencing data. To simplify the gene models and speed up construction of isoforms when multiple TSS clusters lie close to each other, a 400 nt cutoff was chosen to merge clusters, based on the relationship curve between the number of remaining clusters and the distance cutoff (taking the point where the second-order difference approaches zero) (Supplementary Fig. 12a). The position with the highest signal in the merged cluster was chosen as the TSS candidate for downstream analysis. The same cutoff was chosen for TES clusters based on the statistical analysis in Supplementary Fig. 12b. An isoform matrix was built by projecting Pacbio full-length reads onto segmented bins (0 for intron, 1 for exon). Each row of the isoform matrix, a potential isoform, was examined for evidence of splicing junctions, TSS, and TES. Fully supported isoforms were classified as isoF, partial isoforms were completed as isoC, and isoforms with conflicts against the evidence were rescued with faithful elements as isoR if possible, or otherwise discarded.

In the IGIA pipeline, each transcript was defined as an individual path through the genomic region split by TSSs, TESs, and splice junctions. First, the entire genome was scanned with a 100 bp window and divided into linkage groups based on ssRNA-seq and/or Pacbio Iso-seq signal. For each linkage group, ssRNA-seq and Pacbio reads were used to identify credible junctions with sufficient read support. Then, the TSSs, TESs, and validated splice junctions were mapped onto the genome to divide it into indivisible segments. A segment containing a TSS or TES was termed a "TSS or TES segment". Next, Pacbio reads were mapped onto the segments and validated, filtered and corrected to obtain isoform paths using the junction evidence.
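A minimal sketch of the reliable-cluster filter and summit selection described earlier in this section (TPM ≥0.5 in any tissue, peak signal ≥10 in at least three tissues, summit at the maximum-signal position); the data layout and function names are ours, not the IGIA code.

```python
# Minimal sketch (assumed data layout) of the filter for reliable TSS/TES
# clusters and of the summit selection described above.
def is_reliable_cluster(tpm_per_tissue, peak_signal_per_tissue,
                        min_tpm=0.5, min_peak=10, min_tissues=3):
    """tpm_per_tissue / peak_signal_per_tissue: one value per tissue (16 here)."""
    passes_tpm = any(t >= min_tpm for t in tpm_per_tissue)
    passes_peak = sum(s >= min_peak for s in peak_signal_per_tissue) >= min_tissues
    return passes_tpm and passes_peak

def cluster_summit(positions, signals):
    """Return the genomic position with the largest pooled signal in the cluster."""
    return max(zip(positions, signals), key=lambda x: x[1])[0]

# Toy usage for one cluster across 16 tissues.
tpm = [0.0, 0.7, 0.1] + [0.0] * 13
peaks = [12, 30, 11] + [0] * 13
print(is_reliable_cluster(tpm, peaks))               # True
print(cluster_summit([101, 102, 103], [5, 40, 9]))   # 102
```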
For each Pacbio isoform path, the corresponding transcript was termed a full-length isoform (isoF) if it started with a TSS segment, ended with a TES segment, and all splice junctions were supported. If the isoform path missed the TSS or TES segment (or both) but all splice junctions were supported, and it could be completed using an isoF path matching its boundary segment, the corresponding transcript was termed a complete isoform (isoC). Reads that could not be completed by a full-length isoform were merged with complementary reads into a merged isoform (isoM) if there was no conflict in their overlap region. If at least one junction in an isoform path was not supported by the data, the error was corrected in conjunction with the isoF path; the corresponding transcript was termed a rescued isoform (isoR) if it could be corrected. For a long read with unsupported junctions that could not be rescued by the above methods, the erroneous junctions were removed and the gaps were filled by enumerating exons at the loci identified from NGS RNA-seq; the resulting isoform was termed a partial isoform (isoP).

TACO pipeline

For genes without Pacbio long-read support, a revised TACO assembly pipeline was used to reconstruct transcripts from ssRNA-seq reads. First, TACO was run to obtain the original version of the gene annotation26. Based on our analysis of the expression (FPKM) of single-exon and multi-exon fragments identified by TACO (Supplementary Fig. 12c, d), FPKM = 0.5 was chosen as the cutoff to filter approximately 50% of lowly expressed single-exon fragments (Supplementary Fig. 12c). Because multi-exon fragments are longer and have higher average expression, FPKM = 0.2 was chosen as the cutoff to filter approximately 5% of multi-exon fragments (Supplementary Fig. 12d). Thus, lowly expressed fragments and noise signals were excluded from further analysis. If there was a TSS or TES cluster within 500 bp of a transcript boundary, the boundary was fixed to this cluster. The transcripts from this customized TACO pipeline were termed NGS-based isoforms (isoN).

IGIA and TACO integration

The two methods described above were used to construct the IGIA gene assembly. For regions with Pacbio support, the gene assembly was predicted using the IGIA pipeline. To obtain a complete cotton gene annotation, gene loci without Pacbio coverage were supplemented with the TACO assembly and modified TSSs and TESs. IsoF and isoR were treated as the core IGIA annotation due to their highest reliability. The IGIA and TACO transcripts were collectively referred to as the complete IGIA annotation. Nearly 50% of the genes <1 kb were not covered by Pacbio reads, and these accounted for 58% of the genes not hit by Pacbio. For genes 1–3 kb and >3 kb in length, Pacbio reads covered 85% and 95% of genes, respectively. For each IGIA isoform, the coding probability was predicted using cpat (v1.2.2). Isoforms with coding probability ≥50% were classified as coding isoforms, and the longest ORF predicted using ORFfinder (v0.4.3) of NCBI was used as the CDS region. Gene clusters were built using the UCSC Genome Browser tools. A variety of methods were used to annotate genes: blastp (v2.2.31+) was used to annotate homologous genes in the NR, UniProt Sprot, KOG, and TAIR databases; Interproscan (v5.25–64.0) was used to predict gene functional domains; conserved domains were searched using rpsblast (v2.2.31+) against the NCBI CDD; and KEGG orthology pathway identification was performed using KEGG software57.
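The routing of Pacbio isoform paths into the isoF/isoC/isoR/isoP categories described above can be summarized, in highly simplified form, as follows; the boolean flags compress evidence checks that the actual pipeline performs on segments and junctions, and the isoM case (merging of complementary reads) is omitted for brevity.

```python
# Highly simplified sketch (not the IGIA implementation) of how a Pacbio isoform
# path could be routed to the isoF / isoC / isoR / isoP categories described above.
def classify_isoform(has_tss, has_tes, junctions_supported, completable, rescuable):
    """Arguments summarize the evidence for one isoform path:
    has_tss / has_tes: the path starts in a TSS segment / ends in a TES segment;
    junctions_supported: every splice junction is supported by reads;
    completable: a missing boundary can be filled from a matching isoF path;
    rescuable: unsupported junctions can be corrected with isoF evidence."""
    if junctions_supported and has_tss and has_tes:
        return "isoF"      # full-length isoform
    if junctions_supported and completable:
        return "isoC"      # completed isoform
    if not junctions_supported and rescuable:
        return "isoR"      # rescued isoform
    return "isoP"          # partial isoform (after removing bad junctions)

print(classify_isoform(True, True, True, False, False))    # isoF
print(classify_isoform(False, True, True, True, False))    # isoC
print(classify_isoform(True, True, False, False, True))    # isoR
```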
The protein-protein interaction (PPI) network was obtained based on homology alignment to the STRING database (Arabidopsis thaliana v10.5)58. For each transcript, the hit with the lowest E-value was considered its functional annotation, and the annotation of the longest transcript of a gene was used as the annotation for that gene.

Comparative analysis of gene annotations

The IGIA annotation was compared with existing gene annotations and current methods. NGS-based transcripts were predicted using Cufflinks (v2.2.1), Stringtie, and TACO (v0.7.2) with default parameters. The Pacbio-based assembly was predicted using the Pacbio ToFU pipeline (v2.2.1) with default parameters. Because a Cottongen gene annotation was not available for the new version of the cotton genome G. arboreum, the Cottongen sequences15 were aligned to the new genome as gene annotations using STARlong with the same parameters as for the Pacbio Iso-seq data. The Cottongen and CGP assemblies were selected as current annotations. For junction comparison, junctions in different gene annotations that overlapped but did not appear in the other annotation were defined as conflicting junctions and were used for subsequent experimental validation.

Expression and tissue specificity analysis

The IGIA core gene expression was quantified using Stringtie. To describe the tissue specificity of a gene across tissues, an entropy-based method was used to calculate the tissue-specificity score S as follows: $$S = H_{\mathrm{max}} - H_{\mathrm{obs}} = \mathrm{log}_{2}N - \left( { - \mathop {\sum }\limits_{i = 1}^{N} P_{i} \times \mathrm{log}_{2}P_{i}} \right)$$ where Pi is the relative expression level in tissue i and N is the number of tissues. A gene was considered expressed in tissue i if its expression level in tissue i was higher than 20% of its highest expression across tissues.

Alternative TSS and TES analysis

To obtain a more comprehensive set of UTR variations caused by alternative TSS/TES usage, TSS and TES clusters with an average TPM > 0.5 across the 16 tissues were used in this part of the analysis. In a published protocol59, TSS clusters with TPM > 0.5 in at least two tissues were used; here, the same threshold was applied, but to obtain more credible results an average expression value across all samples was additionally required to pass the filter. For any two clusters within 50 nt of each other, only the one with the larger signal was used for subsequent analysis. For a gene with multiple TSSs, the dominant TSS in one tissue was defined as the TSS carrying more than 50% of the CAGE-seq signal of the gene. In analyzing differential TSS usage across multiple tissues, a dynamic TSS switch gene was identified by two criteria: it has different dominant TSSs in at least two tissues, and the TSS switch score is >0.3. The TSS switch score W for tissue i and tissue j was defined as: $$W_{i,j} = \left\{ {\begin{array}{*{20}{l}} {P_{i,d\left( i \right)} + P_{j,d\left( j \right)} - 1,} \hfill & {{\mathrm{if}}\;d\left( i \right) \ne d\left( j \right)} \hfill \\ {0,} \hfill & {{\mathrm{else}}} \hfill \end{array}} \right.$$ where d(i) is the dominant TSS site in tissue i and Pi,s is the CAGE-seq signal usage of site s in tissue i. To investigate the pattern of 3′UTR usage, the weighted 3′UTR length metric was calculated as in a published study60. In addition, genes with an alternative 3′UTR between two samples were defined by a change in weighted 3′UTR length >100 nt and TES expression with TPM > 5 in both samples.
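A minimal sketch of the two scores defined above, the entropy-based tissue-specificity score S and the pairwise TSS switch score W; the variable names are ours and the toy inputs are illustrative only.

```python
# Minimal sketch of the two scores defined above: the entropy-based tissue
# specificity S and the pairwise TSS switch score W.
import math

def tissue_specificity(expr):
    """expr: expression levels across N tissues. S = log2(N) - H(P)."""
    total = sum(expr)
    n = len(expr)
    if total == 0:
        return 0.0
    p = [x / total for x in expr]
    entropy = -sum(pi * math.log2(pi) for pi in p if pi > 0)
    return math.log2(n) - entropy

def tss_switch_score(usage_i, usage_j):
    """usage_i / usage_j: {tss_id: fraction of CAGE signal} in tissues i and j.
    W = P_i,d(i) + P_j,d(j) - 1 if the dominant TSS differs, else 0."""
    d_i = max(usage_i, key=usage_i.get)
    d_j = max(usage_j, key=usage_j.get)
    if d_i == d_j:
        return 0.0
    return usage_i[d_i] + usage_j[d_j] - 1.0

print(round(tissue_specificity([100, 1, 1, 1]), 2))   # high score: tissue-specific expression
print(tss_switch_score({"T1": 0.9, "T2": 0.1},
                       {"T1": 0.2, "T2": 0.8}))       # 0.7 (dominant TSS switches)
```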
For alternative TSS and TES function analysis, the sequences of alternative 5′UTRs and 3′UTRs were extracted for the following analyses: miRNA target site prediction using psRNATarget61; RG4 motif scanning with qgrs, accepting motifs with more than three G4 planes and a linker sequence longer than one ribonucleotide; U-rich motif searching using Biopython; and uORF searching using ORFfinder. The 3D structures of the long and short isoforms of NRT1.2 were built using SWISS-MODEL62 homology modeling and visualized with pymol (v2.2 at https://www.pymol.org).

AS analysis

The AS events were identified using rMATS (v3.2.5). For each transcript, the CDS sequence was extracted and translated to predict its functional domains. Conserved domains and Pfam domains were searched using rpsblast (v2.2.31+) against the NCBI CDD and PfamScan (v1.6) of EBI, respectively. Disordered regions, Slim regions, signal peptides, and TM helices were predicted using iupred (v1.0) and VSL2, anchor (v1.0), signalP (v4.1), and TMHMM (v2.0c), respectively. Subcellular localizations with a score > 0.9 in TargetP (v1.1) were retained. GO term enrichment was analyzed using the DAVID web server (v6.8)63.

AS hotspot model

A statistical model was designed to detect the AS hotspot phenomenon. In this model, a gene was divided into several segments based on exon boundaries. For each isoform of a gene, a string of ones and zeros was produced to represent its structure, with "1" denoting the presence of a segment and "0" its absence (Supplementary Fig. 10e). Based on the string matrix of multiple isoforms for all genes, the presence of AS hotspots in a genome could be assessed by counting the types of substrings (Supplementary Fig. 10f) of annotated isoforms in a statistical test. The null hypothesis (no AS hotspot) was that AS events are independent of each other, so that the number of distinct isoform substrings of a given length follows the Markov property. Under this hypothesis, the number of k-length isoform substrings follows the distribution X: $$P\left( {X\left( {\xi = n{\mathrm{|}}K = 1} \right)} \right) = \left\{ {\begin{array}{*{20}{l}} {p,} \hfill & {{\mathrm{if}}\;n = 2} \hfill \\ {1 - p,} \hfill & {{\mathrm{if}}\;n = 1,} \hfill \\ {0,} \hfill & {{\mathrm{else}}} \hfill \end{array}} \right.$$ $$P\left( {X\left( {\xi = n{\mathrm{|}}K = k + 1} \right)} \right) = \mathop {\sum }\limits_{i = \frac{n}{2}}^n C_i^{n - i}p^{n - i}(1 - p)^{2i - n}P\left( {X\left( {\xi = i{\mathrm{|}}K = k} \right)} \right)$$ where \(\xi\) is the path number variable and K is the substring length variable. The expectation of X and the probability p satisfy \(E(X) = (1 + p)^K\), which was used to estimate p and the distribution X. The presence of an AS hotspot in a genome was called when the observed frequency was three-fold larger than expected under the null hypothesis. The minimum number of isoform substrings yielding that difference was defined as the cutoff for identifying AS hotspot regions. The above model was used to scan the genomes of G. arboreum, A. thaliana, H. sapiens, and M. musculus with k-mers from 3 to 10 (Supplementary Fig. 10g and Fig. 4h). To reduce noise, only AS hotspots supported by more than three lines of evidence were used for further analysis.

Polycistron analysis

A polycistron was defined as a transcript containing more than one major CDS region. GO similarity was computed using GOSemSim (v3.4.1).
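The null model behind the AS hotspot test can be made concrete with a short sketch: under independence, each length-k substring extends into one (probability 1 − p) or two (probability p) substrings of length k + 1, which reproduces the recursion above and the expectation E(X) = (1 + p)^K; the implementation below is ours, for illustration only.

```python
# Minimal sketch of the null model described above: the distribution of the number
# of distinct isoform substrings of length K when AS events are independent.
from math import comb

def substring_count_distribution(p, K):
    """Return {n: P(X = n)} for the number of distinct substrings of length K."""
    dist = {1: 1 - p, 2: p}                           # base case, K = 1
    for _ in range(K - 1):
        new = {}
        for i, prob_i in dist.items():                # i substrings at the previous level
            for branched in range(i + 1):             # how many of them split into two
                n = i + branched
                term = comb(i, branched) * p**branched * (1 - p)**(i - branched)
                new[n] = new.get(n, 0.0) + term * prob_i
        dist = new
    return dist

def estimate_p(mean_count, K):
    """Invert E(X) = (1 + p)**K to estimate p from an observed mean substring count."""
    return mean_count ** (1.0 / K) - 1.0

d = substring_count_distribution(p=0.3, K=4)
mean = sum(n * prob for n, prob in d.items())
print(round(mean, 4), round(1.3 ** 4, 4))             # both ~2.8561, confirming E(X) = (1+p)^K
print(round(estimate_p(mean, 4), 4))                  # ~0.3
```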
KO pathway identification with the KEGG software was used for pathway analysis. The protein–protein interaction network obtained from homology alignment to the STRING database (Arabidopsis thaliana v10.5) was used for protein interaction analysis. To examine the co-linearity of polycistrons in other plant genomes, a CDS pair in a cotton polycistron was defined as co-linear in another species if their orthologs were located on the same strand of the same chromosome and the distance between them was less than 20 kb. Since the G. raimondii genome is closely related to our G. arboreum genome, we used blastp (e-value < 1e-6) to map the orthologs, while for other plant species the orthologs were obtained from the PLAZA dicots database (v4.0 at https://bioinformatics.psb.ugent.be/plaza).

PCR validation

Total RNA was treated with DNase I (Fermentas) to digest genomic DNA and then reverse-transcribed to cDNA using SuperScript III (Life Technologies) with oligo dT or random primers. For validating the JS errors, the pooled cDNA sample was used and primers were designed to span the junctions using Primer 5 software. The PCR products were gel-purified and Sanger-sequenced. For validating the tissue-specific expression of genes, SYBR Green quantitative real-time PCR (qPCR) analysis was used. Housekeeping genes predicted from the transcriptome across the 16 tissues were validated and selected as internal controls for qPCR. For TSS and TES validation assays, 5′ and 3′RACE strategies were applied for cDNA synthesis and reverse transcription PCR (RT-PCR) amplification, respectively. All primers used in this study are listed in Supplementary Data 4, 5, 11, 14, 31 and 36.

Nitrate uptake assay

Stably transfected HEK293T cell lines carrying the pMSCV-puro plasmid expressing the long and short versions of the coding sequence of Ga12g01154 (NRT1.2), NRT-L and NRT-S respectively, or the pMSCV-puro empty plasmid as control, were cultured in DMEM medium. To measure NO3- uptake activity, the isotope 15NO3- (10 mM) was added to the medium. Uptake was terminated by removing the uptake solution after 2 and 8 h, respectively, followed by washing twice with ice-cold PBS. The cells were then lyophilized and transferred for LC-MS/MS analysis. The NO3- transport activity was calculated as the ratio of isotope nitrate uptake to cell dry weight.

EMSA assay

The AP2 domains of AP2-I (235–318 aa) and AP2-S (235–303 aa) were expressed with a C-terminal 6xHis tag in E. coli BL21 and purified on a Ni-NTA column. A 58 nt DNA probe containing the binding motif (ATGTCGAT) of the APETALA2 (AP2) ortholog in Arabidopsis64 was used to test the binding activity. The binding reaction (20 μL) consisted of 0.5 μg protein, 2 μL 10x binding buffer (100 mM Tris-HCl, 0.5 M KCl, 10 mM DTT, pH 7.5), 1 μL 50% glycerol, 1 μL 1% NP40, 1 μL 1 M KCl, 4 μL 25 mM MgCl2, 0.1 μg Poly(dI-dC), 5 pmol FAM-labeled (hot) probe, with or without 250 pmol unlabeled (cold) probe. The reaction was incubated at 25 °C for 60 min and electrophoresed on 8% native PAGE gels. FAM fluorescence signals were detected using the Bio-Rad ChemiDoc MP Imaging system. Probes are listed in Supplementary Data 31.

In situ hybridization

According to the DIG RNA labeling kit protocol (Roche), T7 RNA polymerase-mediated in vitro transcription was used to produce digoxigenin-labeled sense and anti-sense probes. The cotton ovules (0, 3, and 5 DPA) were fixed with 4% paraformaldehyde, dehydrated with an ethanol series ranging from 15% to 100%, and embedded in paraffin.
Next, 14 μm sections were prepared and hybridized with the probes. Hybridization signals were developed with anti-digoxigenin-AP and the NBT/BCIP kit (Roche) and detected under a light microscope. Primers are listed in Supplementary Data 11.

SNP and GWAS data

SNP data for the G. arboreum genome were obtained from a previous study7. The GWAS sites were obtained from studies in G. arboreum7 and G. hirsutum65,66,67,68,69,70. For the GWAS data in G. hirsutum, only the GWAS sites in the At-subgenome were used and mapped to the G. arboreum (AA) genome. For a GWAS site ST in At, the following procedure was implemented to locate the corresponding site SA in the AA genome. The sequence ±1000 bp flanking ST was mapped to the AA genome using STARlong, and a uniquely mapped hit was considered the potential region containing SA. The sequence ±200 bp flanking ST was then mapped to this region using BLAT, and the position corresponding to ST in the alignment was accepted as SA if the alignment had no mismatch within ±50 bp of SA and no indel. The number and frequency of SNPs and GWAS sites in intervals were computed using BEDTools (v2.26.0). The SNP profile around given intervals was computed with a 5-bp smoothing window.

The majority of statistical tests were conducted using the Wilcoxon rank sum test with continuity correction (two-tailed). The pathway and protein–protein interaction comparisons in the polycistron analysis used a one-tailed Wilcoxon rank sum test, and the analysis of the activity difference between NRT-L and NRT-S used a two-tailed t-test. The significance levels are indicated by asterisks (*P < 0.05, **P < 0.01, ***P < 0.001). The numbers of TSSs in the CAGE signal usage analysis were n = 5096/8659/6723 for CT/PT/DT, respectively. In addition, the numbers of TESs in the polyA signal usage analysis were n = 870/9180/9493 for CT/PT/DT, respectively. For the statistics of 3′UTR length, 5460 genes remaining after removal of polycistrons were used. In the length comparison analysis for AS hotspot genes, 135 IGIA cotton genes and 142 A. thaliana genes were used. For all polycistron analyses, the numbers of bi-cistrons and tri-cistrons were 1081 and 34, respectively. The random control sets had matched sizes. In all box plots, the 5%, 25%, 75%, and 95% quantiles are shown, and the center line represents the median.

Reporting summary

Further information on research design is available in the Nature Research Reporting Summary linked to this article.

Data supporting the findings of this work are available within the paper and its Supplementary Information files. A reporting summary for this Article is available as a Supplementary Information file. The datasets generated and analyzed during the current study are available from the corresponding author upon request. The sequencing data from this study have been submitted to the NCBI Sequence Read Archive under the accession PRJNA507565. The IGIA gene annotation and all RNA signal tracks generated in this study are freely available in the customized genome browser at http://cotton.whu.edu.cn/igia. The source data underlying Figs. 1e, 2b–d, 2f, 2h, 3b–h, 4e–h, 5b, and 5f–i, as well as Supplementary Figs. 1b, 1d, 2b–f, 4, 5a, 5e, 6d, 6e, 7, 8, 9, 10b, 10g, 11, and 12 are provided as a Source Data file. The IGIA package is available from GitHub (https://github.com/zhouyulab/igia/). All the software used in this study is listed in Supplementary Data 39.

Wang, K. et al. The draft genome of a diploid cotton Gossypium raimondii. Nat. Genet. 44, 1098–1103 (2012). Li, F. et al.
Genome sequence of cultivated upland cotton (Gossypium hirsutum TM-1) provides insights into genome evolution. Nat. Biotechnol. 33, 524–530 (2015). Wang, M. et al. Reference genome sequences of two cultivated allotetraploid cottons, Gossypium hirsutum and Gossypium barbadense. Nat. Genet. 51, 224–229 (2019). Hu, Y. et al. Gossypium barbadense and Gossypium hirsutum genomes provide insights into the origin and evolution of allotetraploid cotton. Nat. Genet. 51, 739–748 (2019). Paterson, A. H. et al. Repeated polyploidization of Gossypium genomes and the evolution of spinnable cotton fibres. Nature 492, 423–427 (2012). Li, F. et al. Genome sequence of the cultivated cotton Gossypium arboreum. Nat. Genet. 46, 567–572 (2014). Du, X. et al. Resequencing of 243 diploid cotton accessions based on an updated A genome identifies the genetic basis of key agronomic traits. Nat. Genet. 50, 796–802 (2018). Parekh, M. J., Kumar, S., Fougat, R. S., Zala, H. N. & Pandit, R. J. Transcriptomic profiling of developing fiber in levant cotton (Gossypium herbaceum L.). Funct. Integr. Genomics 18, 211–223 (2018). Baralle, F. E. & Giudice, J. Alternative splicing as a regulator of development and tissue identity. Nat. Rev. Mol. Cell Biol. 18, 437–451 (2017). Laloum, T., Martín, G. & Duque, P. Alternative splicing control of abiotic stress responses. Trends Plant Sci. 23, 140–150 (2018). Ushijima, T. et al. Light controls protein localization through phytochrome-mediated alternative promoter selection. Cell 171, 1316–1325 (2017). Marquardt, S. et al. Functional consequences of splicing of the antisense transcript COOLAIR on FLC transcription. Mol. Cell 54, 156–165 (2014). Yan, K. et al. Stress-induced alternative splicing provides a mechanism for the regulation of microRNA processing in Arabidopsis thaliana. Mol. Cell 48, 521–531 (2012). Zhang, Y. et al. Integrative genome-wide analysis reveals HLP1, a novel RNA-binding protein, regulates plant flowering by targeting alternative polyadenylation. Cell Res. 25, 864–876 (2015). Yu, J. et al. CottonGen: a genomics, genetics and breeding database for cotton research. Nucleic Acids Res. 42, D1229–D1236 (2014). Steijger, T. et al. Assessment of transcript reconstruction methods for RNA-seq. Nat. Methods 10, 1177–1184 (2013). Sharon, D., Tilgner, H., Grubert, F. & Snyder, M. A single-molecule long-read survey of the human transcriptome. Nat. Biotechnol. 31, 1009–1014 (2013). Wang, B. et al. Unveiling the complexity of the maize transcriptome by single-molecule long-read sequencing. Nat. Commun. 7, 11708 (2016). Abdel-Ghany, S. E. et al. A survey of the sorghum transcriptome using single-molecule long reads. Nat. Commun. 7, 11706 (2016). Wang, M. et al. A global survey of alternative splicing in allopolyploid cotton: landscape, complexity and regulation. New Phytol. 217, 163–178 (2018). Rhoads, A. & Au, K. F. Pacbio sequencing and its applications. Genomics Proteom. Bioinforma. 13, 278–289 (2015). Au, K. F. et al. Characterization of the human ESC transcriptome by hybrid sequencing. Proc. Natl Acad. Sci. USA 110, E4821–E4830 (2013). Batut, P. & Gingeras, T. R. RAMPAGE: promoter activity profiling by paired-end sequencing of 5′-complete cDNAs. in Current Protocols in Molecular Biology 25B.11.1–25B.11.16 (John Wiley & Sons, Inc., 2013). https://doi.org/10.1002/0471142727.mb25b11s104 Hoque, M. et al. Analysis of alternative cleavage and polyadenylation by 3′region extraction and deep sequencing. Nat. Methods 10, 133–139 (2013). Boley, N. et al. 
Genome-guided transcript assembly by integrative analysis of RNA sequence data. Nat. Biotechnol. 32, 341–346 (2014). Niknafs, Y. S., Pandian, B., Iyer, H. K., Chinnaiyan, A. M. & Iyer, M. K. TACO produces robust multisample transcriptome assemblies from RNA-seq. Nat. Methods 14, 68–70 (2017). Gordon, S. P. et al. Widespread polycistronic transcripts in fungi revealed by single-molecule mRNA sequencing. PLoS ONE 10, e0132628 (2015). Wang, K., Huang, G. & Zhu, Y. Transposable elements play an important role during cotton genome evolution and fiber cell development. Sci. China Life Sci. 59, 112–121 (2016). Frith, M. C. et al. A code for transcription initiation in mammalian genomes. Genome Res. 18, 1–12 (2007). Morton, T. et al. Paired-end analysis of transcription start sites in Arabidopsis reveals plant-specific promoter signatures. Plant Cell 26, 2746–2760 (2014). Leppek, K., Das, R. & Barna, M. Functional 5′UTR mRNA structures in eukaryotic translation regulation and how to find them. Nat. Rev. Mol. Cell Biol. 19, 158–174 (2017). Tokizawa, M. et al. Identification of Arabidopsis genic and non-genic promoters by paired-end sequencing of TSS tags. Plant J. 90, 587–605 (2017). Mayr, C. Evolution and biological roles of alternative 3′UTRs. Trends Cell Biol. 26, 227–237 (2016). Sherstnev, A. et al. Direct sequencing of Arabidopsis thaliana RNA reveals patterns of cleavage and polyadenylation. Nat. Struct. Mol. Biol. 19, 845–852 (2012). Fu, H. et al. Genome-wide dynamics of alternative polyadenylation in rice. Genome Res. 26, 1753–1760 (2016). Graber, J. H., Cantor, C. R., Mohr, S. C. & Smith, T. F. In silico detection of control signals: mRNA 3′-end-processing sequences in diverse species. Proc. Natl Acad. Sci. USA 96, 14055–14060 (1999). Wu, X. et al. Genome-wide landscape of polyadenylation in Arabidopsis provides evidence for extensive alternative polyadenylation. Proc. Natl Acad. Sci. USA 108, 12533–12538 (2011). Li, W. et al. Alternative cleavage and polyadenylation in spermatogenesis connects chromatin regulation with post-transcriptional control. BMC Biol. 14, 6 (2016). West, S. M. et al. Developmental dynamics of gene expression and alternative polyadenylation in the Caenorhabditis elegans germline. Genome Biol. 19, 8 (2018). Bhate, M. P., Molnar, K. S., Goulian, M. & DeGrado, W. F. Signal transduction in histidine kinases: insights from new structures. Structure 23, 981–994 (2015). Ahmad, K. F., Engel, C. K. & Privé, G. G. Crystal structure of the BTB domain from PLZF. Proc. Natl Acad. Sci. USA 95, 12123–12128 (1998). Li, Q., Xiao, G. & Zhu, Y.-X. Single-nucleotide resolution mapping of the Gossypium raimondii transcriptome reveals a new mechanism for alternative splicing of introns. Mol. Plant 7, 829–840 (2014). Li, Y. I., Sanchez-Pulido, L., Haerty, W. & Ponting, C. P. RBFOX and PTBP1 proteins regulate the alternative splicing of micro-exons in human brain transcripts. Genome Res. 25, 1–13 (2015). García-Ríos, M. et al. Cloning of a polycistronic cDNA from tomato encoding gamma-glutamyl kinase and gamma-glutamyl phosphate reductase. Proc. Natl Acad. Sci. USA 94, 8249–8254 (1997). Fukudome, A., Sun, D., Zhang, X. & Koiwa, H. Salt stress and CTD PHOSPHATASE-LIKE 4 mediate the switch between production of small nuclear RNAs and mRNAs. Plant Cell 29, 3214–3233 (2017). Shearwin, K. E., Callen, B. P. & Egan, J. B. Transcriptional interference – a crash course. Trends Genet. 21, 339–345 (2005). Davuluri, R. V., Suzuki, Y., Sugano, S., Plass, C. & Huang, T. H.-M. 
The functional consequences of alternative promoter use in mammalian genomes. Trends Genet. 24, 167–177 (2008). Mejía-Guerra, M. K. et al. Core promoter plasticity between maize tissues and genotypes contrasts with predominance of sharp transcription initiation sites. Plant Cell 27, 3309–3320 (2015). Jiang, J., Zhang, C. & Wang, X. A recently evolved isoform of the transcription factor BES1 promotes brassinosteroid signaling and development in Arabidopsis thaliana. Plant Cell 27, 361–374 (2015). Ma, W. & Mayr, C. A membraneless organelle associated with the endoplasmic reticulum enables 3′UTR-mediated protein-protein interactions. Cell 175, 1492–1506 (2018). Li, W. et al. EIN2-directed translational regulation of ethylene signaling in Arabidopsis. Cell 163, 670–683 (2015). Hayashi, N. et al. Identification of Arabidopsis thaliana upstream open reading frames encoding peptide sequences that cause ribosomal arrest. Nucleic Acids Res. 45, 8844–8858 (2017). Kurihara, Y. et al. Transcripts from downstream alternative transcription start sites evade uORF-mediated inhibition of gene expression in Arabidopsis. Proc. Natl Acad. Sci. USA 115, 7831–7836 (2018). Ji, S. J. et al. Isolation and analyses of genes preferentially expressed during early cotton fiber development by subtractive PCR and cDNA array. Nucleic Acids Res. 31, 2534–2543 (2003). Parkhomchuk, D. et al. Transcriptome analysis by strand-specific sequencing of complementary DNA. Nucleic Acids Res. 37, e123 (2009). Zheng, D., Liu, X. & Tian, B. 3′READS+, a sensitive and accurate method for 3′ end sequencing of polyadenylated RNA. RNA 22, 1631–1639 (2016). Moriya, Y., Itoh, M., Okuda, S., Yoshizawa, A. C. & Kanehisa, M. KAAS: an automatic genome annotation and pathway reconstruction server. Nucleic Acids Res. 35, W182–W185 (2007). Szklarczyk, D. et al. The STRING database in 2017: quality-controlled protein-protein association networks, made broadly accessible. Nucleic Acids Res. 45, D362–D368 (2017). Fort, A. & Fish, R. J. Deep cap analysis of gene expression (CAGE): genome-wide identification of promoters, quantification of their activity, and transcriptional network inference. Methods Mol. Biol. 1543, 111–126 (2017). Sanfilippo, P., Wen, J. & Lai, E. C. Landscape and evolution of tissue-specific alternative polyadenylation across Drosophila species. Genome Biol. 18, 229 (2017). Dai, X. & Zhao, P. X. psRNATarget: a plant small RNA target analysis server. Nucleic Acids Res. 39, W155–W159 (2011). Waterhouse, A. et al. SWISS-MODEL: homology modelling of protein structures and complexes. Nucleic Acids Res. 46, W296–W303 (2018). Huang, D. W., Sherman, B. T. & Lempicki, R. A. Bioinformatics enrichment tools: paths toward the comprehensive functional analysis of large gene lists. Nucleic Acids Res. 37, 1–13 (2009). Lehti-Shiu, M. D. et al. Molecular evidence for functional divergence and decay of a transcription factor derived from whole-genome duplication in Arabidopsis thaliana. Plant Physiol. 168, 1717–1734 (2015). Ma, Z. et al. Resequencing a core collection of upland cotton identifies genomic variation and loci influencing fiber quality and yield. Nat. Genet. 50, 803–813 (2018). Wang, M. et al. Asymmetric subgenome selection and cis-regulatory divergence during cotton domestication. Nat. Genet. 49, 579–587 (2017). Li, T. et al. Genome-wide association study discovered candidate genes of Verticillium wilt resistance in upland cotton (Gossypium hirsutum L.). Plant Biotechnol. J. 15, 1520–1532 (2017). Sun, Z. et al. 
Three sub-Jupiter-mass planets: WASP-69b & WASP-84b transit active K dwarfs and WASP-70Ab transits the evolved primary of a G4+K3 binary (1310.5654) D. R. Anderson, A. Collier Cameron, L. Delrez, A. P. Doyle, F. Faedi, A. Fumel, M. Gillon, Y. Gómez Maqueo Chew, C. Hellier, E. Jehin, M. Lendl, P. F. L. Maxted, F. Pepe, D. Pollacco, D. Queloz, D. Ségransan, I. Skillen, B. Smalley, A. M. S. Smith, J. Southworth, A. H. M. J. Triaud, O. D. Turner, S. Udry, R. G. West Feb. 8, 2018 astro-ph.EP We report the discovery of the transiting exoplanets WASP-69b, WASP-70Ab and WASP-84b, each of which orbits a bright star ($V\sim10)$. WASP-69b is a bloated Saturn-mass planet (0.26 $M_{\rm Jup}$, 1.06 $R_{\rm Jup}$) in a 3.868-d period around an active, $\sim$1-Gyr, mid-K dwarf. ROSAT detected X-rays $60 \pm 27"$ from WASP-69. If the star is the source then the planet could be undergoing mass-loss at a rate of $\sim$10$^{12}$ g s$^{-1}$. This is 1 to 2 orders of magnitude higher than the evaporation rate estimated for HD 209458b and HD 189733b, both of which have exhibited anomalously-large Lyman-$\alpha$ absorption during transit. WASP-70Ab is a sub-Jupiter-mass planet (0.59 $M_{\rm Jup}$, 1.16 $R_{\rm Jup}$) in a 3.713-d orbit around the primary of a spatially-resolved, 9-to-10-Gyr, G4+K3 binary, with a separation of 3.3$'$ ($\geq$800 AU). WASP-84b is a sub-Jupiter-mass planet (0.69 $M_{\rm Jup}$, 0.94 $R_{\rm Jup)}$ in an 8.523-d orbit around an active, $\sim$1-Gyr, early-K dwarf. Of the transiting planets discovered from the ground to date, WASP-84b has the third-longest period. For the active stars WASP-69 and WASP-84, we pre-whitened the radial velocities using a low-order harmonic series. We found this reduced the residual scatter more than did the oft-used method of pre-whitening with a fit between residual radial velocity and bisector span. The system parameters were essentially unaffected by pre-whitening. Six newly-discovered hot Jupiters transiting F/G stars: WASP-87b, WASP-108b, WASP-109b, WASP-110b, WASP-111b & WASP-112b (1410.3449) D. R. Anderson, D. J. A. Brown, A. Collier Cameron, L. Delrez, A. Fumel, M. Gillon, C. Hellier, E. Jehin, M. Lendl, P. F. L. Maxted, M. Neveu-VanMalle, F. Pepe, D. Pollacco, D. Queloz, P. Rojo, D. Segransan, A. M. Serenelli, B. Smalley, A. M. S. Smith, J. Southworth, A. H. M. J. Triaud, O. D. Turner, S. Udry, R. G. West Oct. 14, 2014 astro-ph.EP We present the discoveries of six transiting hot Jupiters: WASP-87b, WASP-108b, WASP-109b, WASP-110b, WASP-111b and WASP-112b. The planets have masses of 0.51--2.2 $M_{\rm Jup}$ and radii of 1.19--1.44 $R_{\rm Jup}$ and are in orbits of 1.68--3.78 d around stars with masses 0.81--1.50 $M_{\rm \odot}$. WASP-111b is in a prograde, near-aligned ($\lambda = -5 \pm 16^\circ$), near-circular ($e < 0.10$ at 2 $\sigma$) orbit around a mid-F star. As tidal alignment around such a hot star is thought to be inefficient, this suggests that either the planet migrated inwards through the protoplanetary disc or that scattering processes happened to leave it in a near-aligned orbit. WASP-111 appears to have transitioned from an active to a quiescent state between the 2012 and 2013 seasons, which makes the system a candidate for studying the effects of variable activity on a hot-Jupiter atmosphere.
We find evidence that the mid-F star WASP-87 is a visual binary with a mid-G star. Two host stars are metal poor: WASP-112 has [Fe/H] = $-0.64 \pm 0.15$ and WASP-87 has [Fe/H] = $-0.41 \pm 0.10$. The low density of WASP-112 (0.81 $M_{\rm \odot}$, $0.80 \pm 0.04$ $\rho_{\rm \odot}$) cannot be matched by standard models for any reasonable value of the age of the star, suggesting it to be affected by the "radius anomaly". A Window on Exoplanet Dynamical Histories: Rossiter-McLaughlin Observations of WASP-13b and WASP-32b (1403.4095) R. D. Brothwell, C. A. Watson, G. Hebrard, A. H. M. J. Triaud, H. M. Cegla, A. Santerne, E. Hebrard, D. R. Anderson, D. Pollacco, E. K. Simpson, F. Bouchy, D. J. A. Brown, Y. Gomez Maqueo Chew, A. Collier Cameron, D. J. Armstrong, S. C. C. Barros, J. Bento, J. Bochinski, V. Burwitz, R. Busuttil, L. Delrez, A. P. Doyle, F. Faedi, A. Fumel, M. Gillon, C. A. Haswell, C. Hellier, E. Jehin, U. Kolb, M. Lendl, C. Liebig, P. F. L. Maxted, J. McCormac, G. R. M. Miller, A. J. Norton, F. Pepe, D. Queloz, J. Rodriguez, D. Segransan, I. Skillen, B. Smalley, K. G. Stassun, S. Udry, R. G. West, P. J. Wheatley March 17, 2014 astro-ph.EP We present Rossiter-McLaughlin observations of WASP-13b and WASP-32b and determine the sky-projected angle between the normal of the planetary orbit and the stellar rotation axis ($\lambda$). WASP-13b and WASP-32b both have prograde orbits and are consistent with alignment with measured sky-projected angles of $\lambda={8^{\circ}}^{+13}_{-12}$ and $\lambda={-2^{\circ}}^{+17}_{-19}$, respectively. Both WASP-13 and WASP-32 have $T_{\mathrm{eff}}<6250$K and therefore these systems support the general trend that aligned planetary systems are preferentially found orbiting cool host stars. A Lomb-Scargle periodogram analysis was carried out on archival SuperWASP data for both systems. A statistically significant stellar rotation period detection (above 99.9\% confidence) was identified for the WASP-32 system with $P_{\mathrm{rot}}=11.6 \pm 1.0 $ days. This rotation period is in agreement with the predicted stellar rotation period calculated from the stellar radius, $R_{\star}$, and $v \sin i$ if a stellar inclination of $i_{\star}=90^{\circ}$ is assumed. With the determined rotation period, the true 3D angle between the stellar rotation axis and the planetary orbit, $\psi$, was found to be $\psi=11^{\circ} \pm 14$. We conclude with a discussion on the alignment of systems around cool host stars with $T_{\mathrm{eff}}<6150$K by calculating the tidal dissipation timescale. We find that systems with short tidal dissipation timescales are preferentially aligned and systems with long tidal dissipation timescales have a broad range of obliquities. Transiting planets from WASP-South, Euler and TRAPPIST: WASP-68 b, WASP-73 b and WASP-88 b, three hot Jupiters transiting evolved solar-type stars (1312.1827) L. Delrez, V. Van Grootel, D. R. Anderson, A. Collier-Cameron, A. P. Doyle, A. Fumel, M. Gillon, C. Hellier, E. Jehin, M. Lendl, M. Neveu-VanMalle, P. F. L. Maxted, F. Pepe, D. Pollacco, D. Queloz, D. Ségransan, B. Smalley, A. M. S. Smith, J. Southworth, A. H. M. J. Triaud, S. Udry, R. G. West Dec. 6, 2013 astro-ph.EP We report the discovery by the WASP transit survey of three new hot Jupiters, WASP-68 b, WASP-73 b and WASP-88 b. WASP-68 b has a mass of 0.95+-0.03 M_Jup, a radius of 1.24-0.06+0.10 R_Jup, and orbits a V=10.7 G0-type star (1.24+-0.03 M_sun, 1.69-0.06+0.11 R_sun, T_eff=5911+-60 K) with a period of 5.084298+-0.000015 days. 
Its size is typical of hot Jupiters with similar masses. WASP-73 b is significantly more massive (1.88-0.06+0.07 M_Jup) and slightly larger (1.16-0.08+0.12 R_Jup) than Jupiter. It orbits a V=10.5 F9-type star (1.34-0.04+0.05 M_sun, 2.07-0.08+0.19 R_sun, T_eff=6036+-120 K) every 4.08722+-0.00022 days. Despite its high irradiation (2.3 10^9 erg s^-1 cm^-2), WASP-73 b has a high mean density (1.20-0.30+0.26 \rho_Jup) that suggests an enrichment of the planet in heavy elements. WASP-88 b is a 0.56+-0.08 M_Jup planet orbiting a V=11.4 F6-type star (1.45+-0.05 M_sun, 2.08-0.06+0.12 R_sun, T_eff=6431+-130 K) with a period of 4.954000+-0.000019 days. With a radius of 1.70-0.07+0.13 R_Jup, it joins the handful of planets with super-inflated radii. The ranges of ages we determine through stellar evolution modeling are 4.2-8.3 Gyr for WASP-68, 2.7-6.4 Gyr for WASP-73 and 1.8-5.3 Gyr for WASP-88. WASP-73 appears to be a significantly evolved star, close to or already in the subgiant phase. WASP-68 and WASP-88 are less evolved, although in an advanced stage of core H-burning. Three irradiated and bloated hot Jupiters: WASP-76b, WASP-82b & WASP-90b (1310.5607) R.G. West, J.-M. Almenara, D.R. Anderson, F. Bouchy, D. J. A. Brown, A. Collier Cameron, M. Deleuil, L. Delrez, A. P. Doyle, F. Faedi, A. Fumel, M. Gillon, G. Hebrard, C. Hellier, E. Jehin, M. Lendl, P.F.L. Maxted, F. Pepe, D. Pollacco, D. Queloz, D. Segransan, B. Smalley, A.M.S. Smith, A.H.M.J. Triaud, S. Udry We report three new transiting hot-Jupiter planets discovered from the WASP surveys combined with radial velocities from OHP/SOPHIE and Euler/CORALIE and photometry from Euler and TRAPPIST. All three planets are inflated, with radii 1.7-1.8 Rjup. All orbit hot stars, F5-F7, and all three stars have evolved, post-MS radii (1.7-2.2 Rsun). Thus the three planets, with orbits of 1.8-3.9 d, are among the most irradiated planets known. This reinforces the correlation between inflated planets and stellar irradiation. Discovery of WASP-65b and WASP-75b: Two Hot Jupiters Without Highly Inflated Radii (1307.6532) Y. Gómez Maqueo Chew, F. Faedi, D. Pollacco, D. J. A. Brown, A. P. Doyle, A. Collier Cameron, M. Gillon, M. Lendl, B. Smalley, A. H. M. J. Triaud, R. G. West, P. J. Wheatley, R. Busuttil, C. Liebig, D. R. Anderson, D. J. Armstrong, S. C. C. Barros, J. Bento, J. Bochinski, V. Burwitz, L. Delrez, B. Enoch, A. Fumel, C. A. Haswell, G. Hébrard, C. Hellier, S. Holmes, E. Jehin, U. Kolb, P. F. L. Maxted, J. McCormac, G. R. M. Miller, A. J. Norton, F. Pepe, D. Queloz, J. Rodríguez, D. Ségransan, I. Skillen, K. G. Stassun, S. Udry, C. A. Watson Oct. 1, 2013 astro-ph.SR, astro-ph.EP We report the discovery of two transiting hot Jupiters, WASP-65b (M_pl = 1.55 +/- 0.16 M_J; R_pl = 1.11 +/- 0.06 R_J), and WASP-75b (M_pl = 1.07 +/- 0.05 M_J; R_pl = 1.27 +/- 0.05 R_J). They orbit their host star every 2.311, and 2.484 days, respectively. The planet host WASP-65 is a G6 star (T_eff = 5600 K, [Fe/H] = -0.07 +/- 0.07, age > 8 Gyr); WASP-75 is an F9 star (T_eff = 6100 K, [Fe/H] = 0.07 +/- 0.09, age of 3 Gyr). WASP-65b is one of the densest known exoplanets in the mass range 0.1 and 2.0 M_J (rho_pl = 1.13 +/- 0.08 rho_J), a mass range where a large fraction of planets are found to be inflated with respect to theoretical planet models. WASP-65b is one of only a handful of planets with masses of around 1.5 M_J, a mass regime surprisingly underrepresented among the currently known hot Jupiters. 
The radius of Jupiter-mass WASP-75b is slightly inflated (< 10%) as compared to theoretical planet models with no core, and has a density similar to that of Saturn (rho_pl = 0.52 +/- 0.06 rho_J). A photometric study of the hot exoplanet WASP-19b (1212.3553) M. Lendl, M. Gillon, D. Queloz, R. Alonso, A. Fumel, E. Jehin, D. Naef March 27, 2013 astro-ph.SR, astro-ph.EP Context: When the planet transits its host star, it is possible to measure the planetary radius and (with radial velocity data) the planet mass. For the study of planetary atmospheres, it is essential to obtain transit and occultation measurements at multiple wavelengths. Aims: We aim to characterize the transiting hot Jupiter WASP-19b by deriving accurate and precise planetary parameters from a dedicated observing campaign of transits and occultations. Methods: We have obtained a total of 14 transit lightcurves in the r'-Gunn, IC, z'-Gunn and I+z' filters and 10 occultation lightcurves in z'-Gunn using EulerCam on the Euler-Swiss telescope and TRAPPIST. We have also obtained one lightcurve through the narrow-band NB1190 filter of HAWK-I on the VLT measuring an occultation at 1.19 micron. We have performed a global MCMC analysis of all new data together with some archive data in order to refine the planetary parameters and measure the occultation depths in z'-band and at 1.19 micron. Results: We measure a planetary radius of R_p = 1.376 (+/-0.046) R_j, a planetary mass of M_p = 1.165 (+/-0.068) M_j, and find a very low eccentricity of e = 0.0077 (+/-0.0068), compatible with a circular orbit. We have detected the z'-band occultation at 3 sigma significance and measure it to be dF_z'= 352 (+/-116) ppm, more than a factor of 2 smaller than previously published. The occultation at 1.19 micron is only marginally constrained at dF_1190 = 1711 (+/-745) ppm. Conclusions: We have shown that the detection of occultations in the visible is within reach even for 1m class telescopes if a considerable number of individual events are observed. Our results suggest an oxygen-dominated atmosphere of WASP-19b, making the planet an interesting test case for oxygen-rich planets without temperature inversion. WASP-71b: a bloated hot Jupiter in a 2.9-day, prograde orbit around an evolved F8 star (1211.3045) A. M. S. Smith, D. R. Anderson, F. Bouchy, A. Collier Cameron, A. P. Doyle, A. Fumel, M. Gillon, G. Hébrard, C. Hellier, E. Jehin, M. Lendl, P. F. L. Maxted, C. Moutou, F. Pepe, D. Pollacco, D. Queloz, A. Santerne, D. Segransan, B. Smalley, J. Southworth, A. H. M . J. Triaud, S. Udry, R. G. West We report the discovery by the WASP transit survey of a highly-irradiated, massive (2.242 +/- 0.080 MJup) planet which transits a bright (V = 10.6), evolved F8 star every 2.9 days. The planet, WASP-71b, is larger than Jupiter (1.46 +/- 0.13 RJup), but less dense (0.71 +/- 0.16 {\rho}Jup). We also report spectroscopic observations made during transit with the CORALIE spectrograph, which allow us to make a highly-significant detection of the Rossiter-McLaughlin effect. We determine the sky-projected angle between the stellar-spin and planetary-orbit axes to be {\lambda} = 20.1 +/- 9.7 degrees, i.e. the system is 'aligned', according to the widely-used alignment criteria that systems are regarded as misaligned only when {\lambda} is measured to be greater than 10 degrees with 3-{\sigma} confidence. 
WASP-71, with an effective temperature of 6059 +/- 98 K, therefore fits the previously observed pattern that only stars hotter than 6250 K are host to planets in misaligned orbits. We emphasise, however, that {\lambda} is merely the sky-projected obliquity angle; we are unable to determine whether the stellar-spin and planetary-orbit axes are misaligned along the line-of-sight. With a mass of 1.56 +/- 0.07 Msun, WASP-71 was previously hotter than 6250 K, and therefore might have been significantly misaligned in the past. If so, the planetary orbit has been realigned, presumably through tidal interactions with the cooling star's growing convective zone. WASP-80b: a gas giant transiting a cool dwarf (1303.0254) Amaury H. M. J. Triaud, D. R. Anderson, A. Collier Cameron, A. P. Doyle, A. Fumel, M. Gillon, C. Hellier, E. Jehin, M. Lendl, C. Lovis, P. F. L. Maxted, F. Pepe, D. Pollacco, D. Queloz, D. Segransan, B. Smalley, A. M. S. Smith, S. Udry, R. G. West, P. J. Wheatley March 1, 2013 astro-ph.EP We report the discovery of a planet transiting the star WASP-80 (1SWASP J201240.26-020838.2; 2MASS J20124017-0208391; TYC 5165-481-1; BPM 80815; V=11.9, K=8.4). Our analysis shows this is a 0.55 +/- 0.04 Mjup, 0.95 +/- 0.03 Rjup gas giant on a circular 3.07 day orbit around a star with a spectral type between K7V and M0V. This system produces one of the largest transit depths so far reported, making it a worthwhile target for transmission spectroscopy. We find a large discrepancy between the v sin i inferred from stellar line broadening and the observed amplitude of the Rossiter-McLaughlin effect. This can be understood either by an orbital plane nearly perpendicular to the stellar spin or by an additional, unaccounted for source of broadening. WASP-64b and WASP-72b: two new transiting highly irradiated giant planets (1210.4257) M. Gillon, D. R. Anderson, A. Collier-Cameron, A. P. Doyle, A. Fumel, C. Hellier, E. Jehin, M. Lendl, P. F. L. Maxted, J. Montalban, F. Pepe, D. Pollacco, D. Queloz, D. Segransan, A. M. S. Smith, B. Smalley, J. Southworth, A. H. M. J. Triaud, S. Udry, R. G. West Feb. 25, 2013 astro-ph.EP We report the discovery by the WASP transit survey of two new highly irradiated giant planets transiting moderately bright stars. WASP-64b is slightly more massive (1.271+-0.068 M_Jup) and larger (1.271+-0.039 R_Jup) than Jupiter, and is in very-short (a=0.02648+-0.00024 AU, P=1.5732918+-0.0000015 days) circular orbit around a V=12.3 G7-type dwarf (1.004+-0.028 M_Sun, 1.058+-0.025 R_Sun, Teff=5500+-150 K). Its size is typical of hot Jupiters with similar masses. WASP-72b has also a mass a bit higher than Jupiter's (1.461-0.056+0.059 M_Jup) and orbits very close (0.03708+-0.00050 AU, P=2.2167421+-0.0000081 days) to a bright (V=9.6) and moderately evolved F7-type star (1.386+-0.055 M_Sun, 1.98+-0.24 R_Sun, Teff=6250+-100 K). Despite its extreme irradiation (about 5.5 10^9 erg/s/cm^2), WASP-72b has a moderate size (1.27+-0.20 R_Jup) that could suggest a significant enrichment in heavy elements. Nevertheless, the errors on its physical parameters are still too high to draw any strong inference on its internal structure or its possible peculiarity. WASP-54b, WASP-56b and WASP-57b: Three new sub-Jupiter mass planets from SuperWASP (1210.2329) F. Faedi, S. C. C. Barros, D. Brown, A. Collier Cameron, A. P. Doyle, R. Enoch, M. Gillon, Y. Gomez Maqueo Chew, G. Hebrard, M. Lendl, C. Liebig, B. Smalley, A. H. M. J. Triaud, R. G. West, P. J. Wheatley, K. A. Alsubai, D. R. Anderson, D. J. Armstrong, J. 
Bento, J. Bochinski, F. Bouchy, R. Busuttil, L. Fossati, A. Fumel, C. A. Haswell, C. Hellier, S. Holmes, E. Jehin, U. Kolb, J. McCormac, G. R. M. Miller, C. Moutou, A. J. Norton, N. Parley, D. Queloz, A. Santerne, I. Skillen, A. M. S. Smith, S. Udry, C. Watson Jan. 16, 2013 astro-ph.SR, astro-ph.EP We present three newly discovered sub-Jupiter mass planets from the SuperWASP survey: WASP-54b is a heavily bloated planet of mass 0.636$^{+0.025}_{-0.024}$ \mj and radius 1.653$^{+0.090}_{-0.083}$ \rj. It orbits a F9 star, evolving off the main sequence, every 3.69 days. Our MCMC fit of the system yields a slightly eccentric orbit ($e=0.067^{+0.033}_{-0.025}$) for WASP-54b. We investigated further the veracity of our detection of the eccentric orbit for WASP-54b, and we find that it could be real. However, given the brightness of WASP-54 V=10.42 magnitudes, we encourage observations of a secondary eclipse to draw robust conclusions on both the orbital eccentricity and the thermal structure of the planet. WASP-56b and WASP-57b have masses of 0.571$^{+0.034}_{-0.035}$ \mj and $0.672^{+0.049}_{-0.046}$ \mj, respectively; and radii of $1.092^{+0.035}_{-0.033}$ \rj for WASP-56b and $0.916^{+0.017}_{-0.014}$ \rj for WASP-57b. They orbit main sequence stars of spectral type G6 every 4.67 and 2.84 days, respectively. WASP-56b and WASP-57b show no radius anomaly and a high density possibly implying a large core of heavy elements; possibly as high as $\sim$50 M$_{\oplus}$ in the case of WASP-57b. However, the composition of the deep interior of exoplanets remain still undetermined. Thus, more exoplanet discoveries such as the ones presented in this paper, are needed to understand and constrain giant planets' physical properties. WASP-77 Ab: A transiting hot Jupiter planet in a wide binary system (1211.6033) P. F. L. Maxted, D. R. Anderson, A. Collier Cameron, A. P. Doyle, A. Fumel, M. Gillon, C. Hellier, E. Jehin, M. Lendl, F. Pepe, D. L. Pollacco, D. Queloz, D. Ségransan, B. Smalley, J. K. Southworth, A. M. S. Smith, A. H. M. J. Triaud, S. Udry, R. G. West Nov. 26, 2012 astro-ph.EP We report the discovery of a transiting planet with an orbital period of 1.36d orbiting the brighter component of the visual binary star BD -07 436. The host star, WASP-77A, is a moderately bright G8V star (V=10.3) with a metallicity close to solar ([Fe/H]= 0.0 +- 0.1). The companion star, WASP-77B, is a K-dwarf approximately 2 magnitudes fainter at a separation of approximately 3arcsec. The spectrum of WASP-77A shows emission in the cores of the Ca II H and K lines indicative of moderate chromospheric activity. The WASP lightcurves show photometric variability with a period of 15.3 days and an amplitude of about 0.3% that is probably due to the magnetic activity of the host star. We use an analysis of the combined photometric and spectroscopic data to derive the mass and radius of the planet (1.76+-0.06MJup, 1.21+-0.02RJup). The age of WASP-77A estimated from its rotation rate (~1 Gyr) agrees with the age estimated in a similar way for WASP-77B (~0.6 Gyr) but is in poor agreement with the age inferred by comparing its effective temperature and density to stellar models (~8 Gyr). Follow-up observations of WASP-77 Ab will make a useful contribution to our understanding of the influence of binarity and host star activity on the properties of hot Jupiters. WASP-78b and WASP-79b: Two highly-bloated hot Jupiter-mass exoplanets orbiting F-type stars in Eridanus (1206.1177) B. Smalley, D.R. Anderson, A. Collier-Cameron, A.P. 
Doyle, A. Fumel, M. Gillon, C. Hellier, E. Jehin, M. Lendl, P.F.L. Maxted, F. Pepe, D. Pollacco, D. Queloz, D. Segransan, A.M.S. Smith, J. Southworth, A.H.M.J. Triaud, S. Udry, R.G. West Oct. 2, 2012 astro-ph.EP We report the discovery of WASP-78b and WASP-79b, two highly-bloated Jupiter-mass exoplanets orbiting F-type host stars. WASP-78b orbits its V=12.0 host star (TYC 5889-271-1) every 2.175 days and WASP-79b orbits its V=10.1 host star (CD-30 1812) every 3.662 days. Planetary parameters have been determined using a simultaneous fit to WASP and TRAPPIST transit photometry and CORALIE radial-velocity measurements. For WASP-78b a planetary mass of 0.89 +/- 0.08 M_Jup and a radius of 1.70 +/- 0.11 R_Jup is found. The planetary equilibrium temperature of T_P = 2350 +/- 80 K for WASP-78b makes it one of the hottest of the currently known exoplanets. WASP-79b is found to have a planetary mass of 0.90 +/- 0.08 M_Jup, but with a somewhat uncertain radius due to lack of sufficient TRAPPIST photometry. The planetary radius is at least 1.70 +/- 0.11 R_Jup, but could be as large as 2.09 +/- 0.14 R_Jup, which would make WASP-79b the largest known exoplanet. The TRAPPIST survey of southern transiting planets. I. Thirty eclipses of the ultra-short period planet WASP-43 b (1201.2789) M. Gillon, A. H. M. J. Triaud, J. J. Fortney, B.-O. Demory, E. Jehin, M. Lendl, P. Magain, P. Kabath, D. Queloz, R. Alonso, D. R. Anderson, A. Collier Cameron, A. Fumel, L. Hebb, C. Hellier, A. Lanotte, P. F. L. Maxted, N. Mowlavi, B. Smalley May 31, 2012 astro-ph.EP We present twenty-three transit light curves and seven occultation light curves for the ultra-short period planet WASP-43 b, in addition to eight new measurements of the radial velocity of the star. Thanks to this extensive data set, we improve significantly the parameters of the system. Notably, the largely improved precision on the stellar density (2.41+-0.08 rho_sun) combined with constraining the age to be younger than a Hubble time allows us to break the degeneracy of the stellar solution mentioned in the discovery paper. The resulting stellar mass and size are 0.717+-0.025 M_sun and 0.667+-0.011 R_sun. Our deduced physical parameters for the planet are 2.034+-0.052 M_jup and 1.036+-0.019 R_jup. Taking into account its level of irradiation, the high density of the planet favors an old age and a massive core. Our deduced orbital eccentricity, 0.0035(-0.0025,+0.0060), is consistent with a fully circularized orbit. We detect the emission of the planet at 2.09 microns at better than 11-sigma, the deduced occultation depth being 1560+-140 ppm. Our detection of the occultation at 1.19 microns is marginal (790+-320 ppm) and more observations are needed to confirm it. We place a 3-sigma upper limit of 850 ppm on the depth of the occultation at ~0.9 microns. Together, these results strongly favor a poor redistribution of the heat to the night-side of the planet, and marginally favor a model with no day-side temperature inversion. The rapid rotation and complex magnetic field geometry of Vega (1006.5868) P. Petit, F. Lignières, G.A. Wade, M. Aurière, T. Böhm, S. Bagnulo, B. Dintrans, A. Fumel, J. Grunhut, J. Lanoux, A. Morgenthaler, V. Van Grootel Aug. 23, 2010 astro-ph.SR The recent discovery of a weak surface magnetic field on the normal intermediate-mass star Vega raises the question of the origin of this magnetism in a class of stars that was not known to host magnetic fields.
We aim to confirm the field detection and provide additional observational constraints about the field characteristics, by modelling the magnetic geometry of the star and by investigating the seasonal variability of the reconstructed field. We analyse a total of 799 circularly-polarized spectra collected with the NARVAL and ESPaDOnS spectropolarimeters during 2008 and 2009. We employ a cross-correlation procedure to compute, from each spectrum, a mean polarized line profile with a signal-to-noise ratio of about 20,000. The technique of Zeeman-Doppler Imaging is then used to determine the rotation period of the star and reconstruct the large-scale magnetic geometry of Vega at two different epochs. We confirm the detection of circularly polarized signatures in the mean line profiles. The amplitude of the signatures is larger when spectral lines of higher magnetic sensitivity are selected for the analysis, as expected for a signal of magnetic origin. The short-term evolution of polarized signatures is consistent with a rotational period of 0.732 \pm 0.008 d. The reconstructed magnetic topology unveils a magnetic region of radial field orientation, closely concentrated around the rotation pole. This polar feature is accompanied by a small number of magnetic patches at lower latitudes. No significant variability in the field structure is observed over a time span of one year. The repeated observation of a weak photospheric magnetic field on Vega suggests that a previously unknown type of magnetic stars exists in the intermediate-mass domain. Vega may well be the first confirmed member of a much larger, as yet unexplored, class of weakly-magnetic stars.
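The WASP-13b and WASP-32b entry above reports that the stellar rotation period found with a Lomb-Scargle periodogram (11.6 ± 1.0 d) agrees with the period predicted from the stellar radius R* and v sin i when an inclination of i* = 90° is assumed. As a rough illustration of that cross-check, the sketch below uses astropy; the stellar radius and v sin i are illustrative placeholders chosen to give a period near 11.6 d, not values quoted in the papers listed here, and the light curve is synthetic.

```python
import numpy as np
import astropy.units as u
from astropy.timeseries import LombScargle

# Illustrative (hypothetical) stellar parameters, not taken from the abstracts above.
R_star = 1.1 * u.R_sun        # stellar radius
vsini = 4.8 * u.km / u.s      # projected rotational velocity

# If the star is seen equator-on (i_* = 90 deg), v sin i equals the equatorial
# velocity, so the predicted rotation period is the circumference over v sin i.
P_pred = (2 * np.pi * R_star / vsini).to(u.day)
print(f"Predicted rotation period: {P_pred:.1f}")   # ~11.6 d for these numbers

# The measured period would come from a periodogram of the survey photometry;
# a synthetic, spot-modulated light curve stands in for archival data here.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 120.0, 500))                                   # days
flux = 1.0 + 0.003 * np.sin(2 * np.pi * t / 11.6) + rng.normal(0, 0.001, t.size)

freq, power = LombScargle(t, flux).autopower(maximum_frequency=1.0)          # cycles/day
P_meas = 1.0 / freq[np.argmax(power)]
print(f"Periodogram peak: {P_meas:.1f} d")
```

Because v sin i only bounds the equatorial velocity from below, the period predicted this way is an upper limit; for i* < 90° the true rotation period is shorter by a factor of sin i*.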
Dried strawberries as a high nutritional value fruit snack

Jolanta Kowalska, Hanna Kowalska, Agata Marzec, Tomasz Brzeziński, Kinga Samborska & Andrzej Lenart

Food Science and Biotechnology, volume 27, pages 799–807 (2018)

The purpose of this study was to determine the possibility of using a chokeberry juice concentrate as a component of the osmotic solution, and of applying convection-microwave-vacuum drying, to obtain dried health-promoting strawberries. The research material was Honeoye strawberries, which were dehydrated in sucrose and in sucrose with added chokeberry juice concentrate, and then subjected to microwave-convection-vacuum or freeze-drying. Analyses were conducted to determine the influence of the applied processes on vitamin C content, total polyphenols, antioxidant activity, and sensory properties of the dried fruit. The results confirmed the possibility of using a chokeberry juice concentrate as a component of the osmotic solution, especially with regard to polyphenolics content and antioxidant activity. In addition, convection-microwave-vacuum drying was shown to be a promising technology for the production of dried strawberries with high pro-health potential and acceptable sensory qualities.

Seasonality is one of the key factors which determine the need for fruit processing, primarily into juices, beverages and concentrates, but also into solid, frozen or dried products [1]. The popularity of strawberries is due to their sensory qualities as well as their low caloric value and their content of easily digestible sugars, organic acids, minerals, vitamins and ingredients with antioxidant properties [2, 3]. The health benefits of strawberries are determined by the abundance of biologically active compounds that support the natural resistance of the organism, including ellagic acid (hexahydroxydiphenic acid) and polyphenolics, mainly flavonoids, which help neutralize free radical damage, thereby reducing the risk of cardiovascular disease [4, 5]. Due to their short shelf life, strawberries are subjected to various technological processes, mainly freezing and processing into jams or beverages. One of the most important methods is drying, which extends the shelf life of strawberries and also allows new products to be manufactured. Regardless of the method applied, the high temperature used in drying causes the loss of bioactive compounds and changes in the sensory properties of fruit. Hence, it is essential to adjust the drying method, in terms of its type and parameters, to the raw material in order to optimize the properties of the dried fruit. In addition, pre-treatment processes such as osmotic dehydration with various solutions are used to ensure the desired nutritional and sensory properties of dried products [6]. The most commonly used osmotic substances are carbohydrates. Products with high sugar content are not advisable as fruit snacks for nutritional reasons, therefore natural bioactive ingredients are used instead. The use of fruit juice concentrates and fruit pomace extracts allows products enriched with bioactive components to be manufactured. Chokeberry is a fruit rich in antioxidants such as anthocyanins, flavanols and procyanidins, and it also contains vitamins (C, B2, B6, E, P, PP).
The use of chokeberry concentrate can enrich fruit with bioactive ingredients that are partially lost in technological processes [7]. The aim of this study was to analyze the pro-health potential of chokeberry juice concentrate as an osmotic substance to enhance the bioactive properties of dried strawberries produced with various drying techniques.

The study was conducted with strawberries of the Honeoye variety at the stage of trade maturity (diameter of 25–30 mm), purchased directly from the grower. The physiological state of the fruit indicated that it was fully formed and grown, at its most tasty, fully colored and firm. Sucrose was purchased at a local shop, and chokeberry (Aronia melanocarpa) juice concentrate (CJC) of about 64°Brix [9870.6 mg of gallic acid/100 g dry matter; DPPH (2,2-diphenyl-1-picrylhydrazyl) 50: 0.058] at RAUCH Polska Sp. z o.o. in Płońsk.

Osmotic dehydration (OD) of strawberries was carried out in solutions (sucrose and distilled water) at 50°Brix concentration. Solutions of sucrose (Suc) and a mixture (Suc-CJC) of sucrose and chokeberry juice concentrate (diluted from 65 to 50°Brix) in a ratio of 1:1 (w:w) were used. The ratio of the weight of raw material to solution was constant at 2:1. The process was conducted in a water bath with shaking (ELPAN type 357) at a temperature of 60 °C for 120 min, with continuous stirring at 1 Hz amplitude.

Strawberries were dried without pre-treatment (control sample) or after OD, using two methods: (1) convection drying (120 min, 50 °C, air velocity 2 m/s) followed by microwave-vacuum drying ("puffing") at a microwave power of 400 W, pressure of 35 hPa, time of about 6 min and temperature of 50–70 °C; (2) freeze-drying, i.e. freezing in a shock freezer at − 40 °C for 120 min followed by vacuum drying at 25 °C for 24 h at 100 Pa.

Dry matter content of the samples was determined according to the standard PN-ISO-1026:2000 [8]. The content of vitamin C in strawberries was determined spectrophotometrically at a wavelength of λ = 500 nm [9]. The principle of the method consists in the extraction of vitamin C with oxalic acid and its quantitative oxidation in the acidic medium to dehydroascorbic acid by an excess of 2,6-dichlorophenolindophenol. The results were calculated from the regression equation of the calibration curve and expressed as mg vitamin C per 100 g d.m. (dry matter).

The Folin-Ciocalteu method [4] was used to determine the content of polyphenolic compounds in the strawberry samples. Acetone extracts were prepared by adding 80% acetone to 2.0 g of fresh or 0.2 g of dried strawberries. The samples were homogenized and filtered, and then distilled water and Folin-Ciocalteu reagent were added to the resultant extract. Afterwards, the samples were mixed and sodium carbonate was added. The prepared samples were kept for 60 min without access of light. Absorbance was measured with a Helios γ ThermoSpectronic spectrometer at a wavelength of 750 nm against a blank. A standard curve was plotted for chlorogenic acid at various concentrations. The total polyphenols content of the samples was calculated from the equation of the standard curve, taking dilutions into account, and expressed in mg of chlorogenic acid per 100 g d.m. (dry matter).

The modified method proposed by Wu et al. [10] was used to determine the antioxidative potential of the samples, expressed as DPPH radical scavenging ability.
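Both the vitamin C and the total-polyphenol determinations described above reduce a measured absorbance to a content via a linear standard (calibration) curve; the DPPH assay itself is described next. The snippet below is a minimal sketch of that standard-curve step in Python, with made-up standard concentrations, absorbances, dilution and sample masses rather than data from this study.

```python
import numpy as np

# Hypothetical standard curve: absorbance at 750 nm of chlorogenic acid standards.
std_conc = np.array([0.0, 20.0, 40.0, 60.0, 80.0, 100.0])   # mg/L (illustrative)
std_abs = np.array([0.02, 0.18, 0.35, 0.53, 0.69, 0.86])    # measured absorbance

# Linear regression A = slope * c + intercept
slope, intercept = np.polyfit(std_conc, std_abs, 1)

def concentration(absorbance: float) -> float:
    """Invert the standard curve to get concentration in mg/L."""
    return (absorbance - intercept) / slope

# Example extract reading, scaled to mg per 100 g dry matter of the sample.
a_sample = 0.47            # absorbance of the diluted extract (invented)
dilution = 25.0            # total dilution factor (invented)
extract_volume_l = 0.05    # extract volume in litres (invented)
sample_dm_g = 0.18         # grams of dry matter extracted (invented)

mg_per_100g_dm = concentration(a_sample) * dilution * extract_volume_l / sample_dm_g * 100
print(f"{mg_per_100g_dm:.0f} mg per 100 g dry matter")
```

The same inversion applies to the vitamin C assay at 500 nm, only with ascorbic acid standards instead of chlorogenic acid.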
Extracts were prepared as for the determination of polyphenolics content. The basic DPPH reagent was prepared by dissolving 1.2 mg of DPPH in 50 cm3 of methanol (99%, technical grade), for which a blank was prepared. The Helios γ ThermoSpectronic spectrometer was used for absorbance measurements at a wavelength of 515 nm against 99% (v/v) methanol. The antioxidative activity [A] of the analyzed extracts against DPPH radicals was calculated from the absorbances of the sample and control tests using the following equation:

$$ A = \frac{A_{c} - A_{a}}{A_{c}} \times 100\% $$

where Ac is the absorbance of the control test and Aa the absorbance of the sample test. All assays were performed at least three times in parallel series, and each determination was conducted in five replications.

Sensory evaluation was carried out according to ISO 13299:2016 [11] by 15 independent panelists. The assessment was conducted on a 10-point scale with boundary conditions, where 1 meant the undesirable, least intense product, while 10 denoted the desirable product with the most desired overall sensory quality. The analyzed qualitative attributes were as follows: color (from red to dark red, with green spots visible, typical of the fruit); taste (sweet, with a slightly perceptible sour taste, juicy); flavor (slightly fruity, fresh fruit, characteristic of fresh strawberries); crispness/hardness (when pressed with a finger the product is not deformed, breaking/biting requires great force); and overall sensory quality (acceptable to unacceptable).

The statistical analysis of the results was conducted using a multi-way analysis of variance, ANOVA (StatSoft Statistica 12.0). Two factors were investigated, the type of osmotic solution used for fruit pre-treatment and the drying method, at a significance level of p = 0.05. Significant differences between means were determined with Tukey's multiple range test.

The influence of convective with microwave-vacuum drying ("puffing") and freeze-drying on dry matter content

The average dry matter content of fresh strawberries was 7.53% (Table 1). Such a low dry matter content means a high water content, which determines the taste, mainly the juiciness, of the fruit. The consumption of such fruit ensures good hydration of the body. At the same time, it is conducive to the development of microorganisms, hence the storage of such fruit is limited [12]. Various methods of drying promote a significant loss of water from fruit. Osmotic pre-treatment and also the type of osmotic solution had a significant impact on the dry matter content of dried strawberries (Table 1). Literature reports describe various extents of diffusion of osmotic substance particles, as affected by their molecular weight [6]. Partial replacement of sucrose with the chokeberry juice concentrate resulted in a decrease in dry matter content (mainly sugar) in favor of monosaccharides naturally occurring in the fruit.

Table 1 Dry matter content of osmo-dehydrated strawberries in different osmotic solutions, dried by freeze-drying or by convection followed by the microwave-vacuum method ("puffing")

There was a significantly higher dry matter content in the freeze-dried samples compared to those subjected to convection followed by microwave-vacuum drying ("puffing"), which may be due to the principle of the method (Table 1). According to Sunjka et al.
[1], Changrue et al. [13] and De Bruijn et al. [3], microwave energy is absorbed primarily by the water contained in the material. The energy absorption by the wet material depends on its moisture distribution and causes selective heating of its interior parts, thus protecting low-moisture parts, e.g. the material surface, from overheating [14]. Microwave heating is volumetric, and vapor generated inside the product develops internal pressure gradients that cause water to flow from the interior to the surface of the material. In this way, food shrinkage is reduced [15]. Combined methods, including osmotic dehydration, the combination of microwave drying with convective drying, and the use of reduced pressure, can shorten the drying time and thus allow more labile components, e.g. vitamins, to be retained, as well as quality characteristics such as unchanged color [1, 16].

The influence of convective with microwave-vacuum drying ("puffing") and freeze-drying on vitamin C content

According to Proteggente et al. [17] and Ramful et al. [20], strawberries can be classified as vitamin C-rich fruit. While the recommended daily intake for adults is 70 mg, vitamin C is non-toxic even in high doses, and hypervitaminosis of vitamin C has not been scientifically proven [21, 22]. Analyses showed about 490 mg of vitamin C in 100 g dry matter of fresh strawberries. These results are comparable to the values obtained by Wojdyło et al. [18], Giampieri et al. [23], and Gamboa-Santos et al. [19]. Because of the short shelf life of strawberries, resulting from their high water activity and the associated microbial and enzymatic susceptibility, these fruit are often subjected to a variety of drying procedures. Due to the high temperatures involved, the content of labile compounds is reduced, vitamin C in particular [12]. As demonstrated by Santos and Silva [24], Goula and Adamopoulos [25], and Nuñez-Mancilla et al. [26], vitamin C is the least resistant to external factors such as oxygen, light, temperature, pressure and exposure time. Therefore, many studies on food processing take vitamin C as an indicator of food quality. Due to the complexity and variety of factors which influence changes in the vitamin C content of foods [26, 27], the present study compared the changes in its content caused by initial osmotic dehydration and by the two methods of fruit drying.

The tested dried strawberries had a significantly lower vitamin C content than fresh fruit (Fig. 1). The high concentration of the carbohydrate used for osmotic pre-treatment preserved vitamin C in the dried fruit (Fig. 1). A significantly higher vitamin C content (by 57–81% for freeze-dried and 19–40% for puffed samples) was determined in dried strawberries previously subjected to osmotic dehydration than in those with no pre-treatment.

Fig. 1 The influence of the kind of osmotic solution and drying technique on the vitamin C content of strawberry samples dried by "puffing" or freeze-drying; not pre-treated (No OD), pre-osmotically dehydrated in sucrose (Suc) or in sucrose with chokeberry juice concentrate solution (Suc-CJC). The same letter (a, b, c) indicates a lack of statistically significant differences between kinds of osmotic solution, and (A, B) between drying techniques

Vitamin C content of strawberries was significantly affected by the type of osmotic solution applied.
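Statements of significance such as the one above come from the two-factor analysis of variance (osmotic solution × drying method) with Tukey's test at p = 0.05 described in the statistical-analysis section; the study itself used Statistica 12.0. The sketch below shows how an equivalent comparison could be run in Python with statsmodels on a small, entirely fabricated vitamin C data set.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Fabricated vitamin C results (mg/100 g d.m.), two replicates per combination,
# used only to illustrate the analysis; these are not values from the study.
df = pd.DataFrame({
    "solution": ["NoOD", "Suc", "SucCJC"] * 4,
    "drying":   ["FD"] * 6 + ["Puf"] * 6,
    "vit_c":    [210, 340, 400, 205, 330, 410,
                 180, 250, 300, 175, 245, 310],
})

# Two-factor ANOVA with interaction, matching the description above.
model = ols("vit_c ~ C(solution) * C(drying)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Tukey's multiple-comparison test on the osmotic-solution factor.
print(pairwise_tukeyhsd(df["vit_c"], df["solution"], alpha=0.05))
```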
Strawberries dehydrated in a sucrose solution with the chokeberry juice concentrate contained 17–19% more vitamin C than those dehydrated in sucrose with no additive. The relatively low content of vitamin C in chokeberry fruit (2.4 mg per 100 g fruit) contributed to its low content in dried strawberries pre-dehydrated in the osmotic solution with the concentrate. In addition, part of the vitamin C was also lost during the processing of chokeberry fruit into the concentrate. Furthermore, this study demonstrates a significantly higher vitamin C content in freeze-dried strawberries compared to those subjected to microwave-convective-vacuum drying (Fig. 1). According to Santos and Silva [24], citing Heng et al. [28] and Vial et al. [29], the protective effect on vitamin C may be attributed to the osmotic substance, sugar. As indicated by Santos and Silva [24], the mechanism of vitamin C degradation primarily depends on water content. Goula and Adamopoulos [25] reported intensification of vitamin C losses along with a decrease in moisture content reaching up to 70%. Apart from the various factors affecting vitamin C degradation, as shown in numerous studies, its retention depends on the applied temperature, microwave power, pressure, type of pre-treatment (e.g. osmotic dehydration), and many others [26, 27]. Riva et al. [30] showed the possibility of deactivating the enzymes responsible for the oxidation and degradation of vitamin C under the influence of high temperatures (e.g. during osmotic dehydration or microwave-convection drying), which can also prevent losses of this labile component. The present results show that freeze-drying at 25 °C allowed more vitamin C to be preserved, which may result from the lower process temperature and the reduced oxygen availability under vacuum. Nevertheless, the higher temperature and short time of the second drying technique (50 °C for 120 min during convection and 50–70 °C for about 6 min during microwave-vacuum drying) resulted in a vitamin C content only 16–29% lower in the dried strawberries. Thus, the "puffing" method of fruit drying is highly beneficial with regard to the vitamin C content of the dried fruit.

The influence of convective with microwave-vacuum drying ("puffing") and freeze-drying on polyphenolics content

The high antioxidative capacity of strawberries is related to their polyphenolics [31, 32]. Both technological processes and storage can change their content, i.e. some may reduce it while others may cause the polyphenolics content to increase. De Bruijn et al. [3] showed that the metabolic pathways of polyphenolics can be stimulated by stress, e.g. modified storage conditions with limited oxygen availability and without light. High temperatures can induce losses of polyphenolics, but the presence of proteins and carbohydrates in the nutritional matrix may protect against the effects of peroxidase and polyphenol oxidase, thereby protecting the product from losses of antioxidative compounds [3]. The analyzed fresh Honeoye strawberries were characterized by a total polyphenols content of 27.7 g gallic acid in 100 g of dry matter (Fig. 2). Strawberries are a rich source of ellagitannins, anthocyanins, and procyanidins [33]. However, their contents differ among strawberry varieties and additionally depend on crop or storage conditions. The purpose of this study was to analyze changes in the content of selected bioactive components as a result of the applied osmotic dehydration and drying processes.
As a result of osmotic pre-treatment and drying, the content of polyphenolics in strawberries decreased significantly, i.e. by 12–66% (Fig. 2). However, there was no statistically significant difference in the polyphenolics content, regardless of osmotic pre-treatment and drying technique. A tendency towards the greatest loss of polyphenolics was observed in the samples that were not osmotically dehydrated but only dried (16.8–18.7 g/100 g d.m.). The highest content of polyphenolics was determined in the dried strawberries osmotically pre-treated in sucrose solution with the addition of chokeberry juice concentrate (22.3–24.9 g/100 g d.m.).

Fig. 2 The influence of the kind of osmotic solution and drying technique on the polyphenols content of strawberry samples dried by "puffing" or freeze-drying; not pre-treated (No OD), pre-osmotically dehydrated in sucrose (Suc) or in sucrose with chokeberry juice concentrate solution (Suc-CJC). The same letter (a, b, c) indicates a lack of statistically significant differences between kinds of osmotic solution, and (A, B) between drying techniques

When referring to the needs of the consumer, the change in total polyphenols content is expressed in terms of product weight (fresh or dried strawberries). The presence of polyphenols in plant tissues is a response to the stress induced by various factors, including a rise or fall in temperature. Under the influence of external factors, polyphenolics can undergo various changes, such as oxidation, hydrolysis or condensation. The least resistant to environmental changes are anthocyanins, while ellagitannins, which predominate in strawberries, behave differently [3, 33]. These hypotheses are reflected in the results of this study. Pre-osmotic treatment combined with convective and vacuum-microwave drying ("puffing") was shown to be effective in preserving heat- and oxygen-sensitive components of dried strawberries, such as polyphenolics and ascorbic acid. However, the freeze-dried samples were characterized by a slightly higher total polyphenols content, which could support the beneficial effect of low temperatures on the behavior of these compounds. As little as 10 min or more of osmotic treatment at 60–65 °C resulted in the inactivation of polyphenol oxidase and peroxidase enzymes, which significantly reduced the oxidation of polyphenolics [3]. However, both elevated temperature and oxygen availability contributed to the loss of total polyphenolics, especially anthocyanins. The results obtained in this study confirm the above conclusions. The drying process affected the total polyphenols content of the analyzed samples. The protective effect of sugars on polyphenolic compounds was demonstrated. The total polyphenols content in the sucrose-dehydrated samples was higher compared to the untreated samples (Fig. 2). A higher content of polyphenolics was also determined in the samples dehydrated in the sucrose solution with added chokeberry juice concentrate. The content of polyphenolics in freeze-dried strawberries after osmotic dehydration in a solution with chokeberry was only about 11% lower than in the raw material. This confirms the validity of using chokeberry juice as a component of the osmotic solution to enrich dried fruit.

The influence of convective with microwave-vacuum drying ("puffing") and freeze-drying on antioxidative activity

Antioxidative activity is mainly due to the reducing properties of polyphenolics [34]. Based on preliminary studies [6], stable DPPH radicals were used for this analysis.
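The antiradical activity values discussed below follow from the equation given in the methods section, A = (Ac − Aa)/Ac × 100%. A short numeric illustration, with invented absorbance readings:

```python
def dpph_activity(a_control: float, a_sample: float) -> float:
    """Antiradical activity (%) from control and sample absorbances at 515 nm."""
    return (a_control - a_sample) / a_control * 100.0

# Invented absorbance readings for three parallel determinations.
a_control = 0.912
a_samples = [0.251, 0.263, 0.247]

activities = [dpph_activity(a_control, a) for a in a_samples]
mean_activity = sum(activities) / len(activities)
print(f"DPPH radical scavenging: {mean_activity:.1f} %")   # ~72 % for these readings
```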
There was no statistically significant difference in the antiradical activity, regardless of osmotic pre-treatment and drying technique. A tendency towards the lowest value of this indicator was observed in the osmotically pre-treated samples dehydrated in a sucrose solution with chokeberry juice concentrate. According to Al-Musharfi et al. [34], citing Hossain et al. [35], there is a correlation, most commonly positive, between antioxidative activity and the ascorbic acid and polyphenolics contents of fruit. This study demonstrated the opposite relationship between vitamin C content, total polyphenols content and antioxidative activity (Fig. 3). The statistical inference showed no correlation between vitamin C content and antioxidative activity against stable DPPH radicals (Table 2). The content of total polyphenolics was also not correlated with the antioxidative activity. Nevertheless, all analyzed samples were characterized by high antioxidative activity, demonstrating the ability to absorb and neutralize free radicals in the human body. Many authors have shown a correlation between antioxidative activity and polyphenolics content [35]. However, as demonstrated by Al-Musharfi et al. [33], despite its high value the antioxidative activity was not correlated with the polyphenolics content. Similar dependencies were demonstrated in the present study.

Fig. 3 The influence of the kind of osmotic solution and drying technique on the DPPH antiradical activity of strawberry samples dried by "puffing" or freeze-drying; not pre-treated (No OD), pre-osmotically dehydrated in sucrose (Suc) or in sucrose with chokeberry juice concentrate solution (Suc-CJC). The same letter (a, b, c) indicates a lack of statistically significant differences between kinds of osmotic solution, and (A, B) between drying techniques

Table 2 Correlations between chemical and sensory properties of dried strawberries

The influence of convective with microwave-vacuum drying ("puffing") and freeze-drying on sensory properties

Besides the antioxidative properties that are important for the human body, consumer preferences and acceptance of the product are also important. Therefore, sensory evaluation of the obtained samples was performed in terms of their color, taste, flavor and overall quality (overall desirability). The sensory panel members rated the dried strawberries as highly desirable, with high scores for the tested attributes (Fig. 4). Strawberries dried by convection-microwave-vacuum were evaluated higher than those subjected to freeze-drying. Values of all the sensory attributes differed significantly as affected by pre-treatment, drying method and osmotic substance. However, the scores fell within fairly narrow ranges. In terms of taste, the highest scores (5.0–6.3) were given to the samples without pre-osmotic treatment, dried by either method. Similar values (5.3–5.6) were obtained for the strawberries pre-osmotically dehydrated in the mixture of sucrose and chokeberry juice concentrate (Suc-CJC), also depending on the drying method. The samples that were first osmo-dehydrated in the sucrose solution (Suc) and then dried by "puffing" (3.1) or freeze-drying (3.7) were the least acceptable, probably due to too much perceptible sweetness (Fig. 4). The color of freeze-dried strawberries was rated higher, and over a much wider range (3.7–6.7), than that of the samples dried by "puffing" (3.8–4.3). The flavor of all dried fruits was scored high and at a very similar level (4.1–4.9).
Depending on the drying method, some of the attributes were scored higher and others lower, although the highest desirability (overall sensory quality) was shown for the samples pre-dehydrated in the sucrose solution with chokeberry juice concentrate (Suc-CJC). This suggests that the products obtained in this way could be accepted by consumers.

Fig. 4 The influence of the kind of osmotic solution and drying technique on the sensory properties of strawberry samples dried by (A) "puffing" (Puf) and (B) freeze-drying (FD); not pre-treated (No OD), pre-osmotically dehydrated in sucrose (Suc) or in sucrose with chokeberry juice concentrate solution (Suc-CJC)

Gamboa-Santos et al. [19] performed a sensory evaluation of strawberries dried and rehydrated in different solutions and showed higher flavor and texture values for the freeze-dried fruit compared to the convection-microwave-vacuum dried samples. They explained these observations by the higher porosity of freeze-dried strawberries and the more compact structure of fruit dried by "puffing". However, considering the other sensory attributes analyzed, fruit obtained via both drying methods had similar scores. Differences between the properties of convection-microwave-vacuum and freeze-dried strawberries were not significant, which suggests that the products obtained by both methods could be accepted by consumers.

A complex evaluation of the properties of dried strawberries

To determine similarities and differences between the analyzed types of dried strawberry samples with respect to the evaluated chemical and sensory properties, a principal component analysis was carried out on the mean values of the results obtained for each of the tested indicators [Fig. 5(A)].

Fig. 5 PCA analysis: (A) PCA diagram, (B) similarities and differences

The number of tested variables was reduced to two main components, PC1 and PC2, which explained 96.12% of the variation in the pro-health and sensory properties of dried strawberries. A significant positive correlation was found between the total polyphenols content of the dried strawberries (snack) and the other bio-component (vitamin C), as well as sensory descriptors such as flavor, color and overall quality [Fig. 5(A), Table 2]. In contrast, a significant negative correlation was observed between dry matter content and factors such as vitamin C content, polyphenols content, and also overall quality, color and flavor. The negative correlation between vitamin C and dry matter contents is likely to result from vitamin C leaching during the initial osmotic treatment and also from the drying treatments [18]. The use of pre-osmotic dehydration with different types of osmotic solution allowed the obtained data to be divided as in Fig. 5(B). The use of osmotic dehydration as a pre-treatment method, the type of osmotic solution, and the drying method applied had a significant impact on the division of the results into separate groups. Basically, the dried strawberries differed essentially in composition and sensory properties from fresh fruit. On the other hand, two further groups were distinguished mainly by the osmotic substance used for the initial dehydration of the strawberries.

Strawberries are beneficial dietary sources of bioactive compounds, including polyphenolics as well as vitamin C. Due to the short availability of fresh fruit, it is reasonable to process strawberries to make them available throughout the year.
Dried strawberries can be a good alternative to snacks satisfying not only the basic nutritional needs but also providing bio-components valuable for the human body. Sensory characteristics are also important because of product acceptance. Dried strawberries enriched with antioxidants derived from concentrated chokeberry juice with acceptable sensory properties may be a snack as well as an additive to other products. Microwave-convection-vacuum drying ("puffing") is an alternative to freeze-drying. Determinations of contents of polyphenolics and vitamin C in dried materials subjected to preliminary dehydration/enrichment and then drying may be useful indicators during development of technologies for the production of high-quality bio-snacks acceptable by consumers. Sunjka PS, Rennie TJ, Beaudry C, Raghavan GSV. Microwave-convective and microwave-vacuum drying of cranberries: a comparative study. Dry Technol. 22: 1217–1231 (2004) de Souza VR, Pereira PAP, da Silva TLT, de Oliveira Lima LC, Pio R, Queiroz F. Determination of the bioactive compounds, antioxidant activity and chemical composition of Brazilian blackberry, red raspberry, strawberry, blueberry and sweet cherry fruits. Food Chem. 156: 362–368 (2014) De Bruijn J, Rivas F, Rodriguez Y, Loyola C, Flores A, Melin P, Borquez R. Effect of vacuum microwave drying on the quality and storage stability of strawberries. J Food Process Preserv. 40: 1104–1115 (2015) Forbes-Hernandez TY, Gasparrini M, Afrin S, Bompadre S, Mezzetti B, Quiles JL, Giampieri F, Battino M. The healthy effects of strawberry polyphenols: Which strategy behind antioxidant capacity? Crit Rev Food Sci Nutr Suppl. 1:46–59 (2016) Giampieri F, Forbes-Hernandez TY, Gasparrini M, Afrin S, Cianciosi D, Reboredo-Rodriguez P, Varela-Lopez A, Quiles JL, Mezzetti B, Battino M. The healthy effects of strawberry bioactive compounds on molecular pathways related to chronic diseases. Ann NY Acad Sci. 1398: 62–71 (2017) Kowalska H, Marzec A, Kowalska J, Ciurzyńska A, Czajkowska K, Cichowska J, Rybak K, Lenart A. Osmotic dehydration of Honeoye strawberries in solutions enriched with natural bioactive molecules. LWT Food Sci Technol. 85:500–505 (2017) Buran TJ, Sandhu AK, Li Z, Rock CR, Yang WW, Gu L. Adsorption/desorption characteristics and separation of anthocyanins and polyphenols from blueberries using microporous adsorbent resins. J Food Eng. 128: 167–173 (2014) PN-ISO1026:2000 Fruit and vegetable products—determination of dry matter content by drying under reduced pressure and of water content by azeotropic distillation PN-A-04019:1998—Produkty spożywcze—Oznaczanie zawartości witaminy C/Food products—determination of of vitamin C content (in Polish) Wu C, Huang M, Lin Y, Ju H, Ching H. Antioxidant properties of Cortex fraxini and its simple coumarins. Food Chem. 104: 1464–1471 (2007) ISO PN-EN ISO 13299:2016 Sensory analysis—Methodology—General guidance for establishing a sensory profile Feguš U, Žigon U, Petermann M, Knez Ž. Effect of drying parameters on physiochemical and sensory properties of fruit powders processed by PGSS-, vacuum- and spray-drying. Acta Chim Slov. 62: 479–487 (2015) Changrue V, Raghavan GSV, Gariépy Y, Orsat V. Microwave vacuum dryer setup and preliminary drying studies on strawberries. J Microw Power Electromagn Energy. 41: 36–44 (2007) Chandrasekaran S, Ramanathan S, Basak T. Microwave food processing—a review. Food Res Int. 52: 243–261 (2013) Zhang M, Tang J, Mujumdar AS, Wang S. Trends in microwave-related drying of fruits and vegetables. 
Trends Food Sci Technol. 17: 524–534 (2006) De Bruijn J, Borquez R. Quality retention in strawberries dried by emerging dehydration method. Food Res Int. 63: 42–48 (2014) Proteggente AR, Pannala AS, Paganga G, van Buren L, Wagner E, Wiseman S, van de Put F, Dacombe C, Rice-Evans CA. (2002) The antioxidant activity of regularly consumed fruit and vegetables reflects their phenolic and vitamin C composition. Free Radic Res 36: 217–233 Wojdyło A, Figiel A, Oszmiański J. Effect of drying methods with the application of vacuum microwaves on the bioactive compounds, color, and antioxidant activity of strawberry fruits. J Agric Food Chem. 57: 1337–1343 (2009) Gamboa-Santos J, Megías-Pérez R, Cristina Soria A, Olano A, Montilla A, Villamiel M. Impact of processing conditions on the kinetic of vitamin C degradation and 2-furoylmethyl amino acid formation in dried strawberries. Food Chem. 153:164–170 (2014) Ramful D, Tarnus E, Aruoma OI, Bourdon E. Bahroun T. Polyphenols composition, vitamin C content and antioxidant capacity of mauritian citrus fruit pulps. Food Res Int. 44: 2088–2099 (2011) Kunachowicz H, Przygoda B, Nadolna I, Iwanow K. Tables of composition and nutritional value of food. Wyd. Lekarskie PZWL Press (2005) (in Polish) Płocharski W, Markowski J, Pytasz U, Rutkowski K (2013) Fruit, vegetables, juices, their caloric content and nutritional value against the demand for energy and nutrients Part 2. The nutritional and health value of permitted health claims. Sci Tech Mag Ferment Fruit Veg Ind. 1: 24–28 (in Polish) Giampieri F, Tulipani S, Alvarez-Suarez JM, Quiles JL, Mezzetti B, Battino M. The strawberry: composition, nutritional quality, and impact on human health. Nutrition. 28: 9–19 (2012) Santos PHS, Silva MA. Retention of vitamin C in drying processes of fruits and vegetables—a review. Dry Technol. 26: 1421–1437 (2008) Goula AM. Adamopoulos KG. Retention of ascorbic acid during drying of tomato halves and tomato pulp. Dry Technol. 24:57–64 (2006) Nuñez-Mancilla Y, Pérez-Won M, Uribe E, Vega-Gálvez A, Di Scala K. Osmotic dehydration under high hydrostatic pressure: effects on antioxidant activity, total phenolics compounds, vitamin C and color of strawberry (Fragaria vesca). LWT Food Sci Technol. 52: 151–156 (2013) Barba FJ, Esteve MJ, Frigola A. Physicochemical and nutritional characteristics of blueberry juice after high pressure processing. Food Res Int. 50: 545–549 (2011) Heng K. Guilbert S. Cuq JL. Osmotic dehydration of papaya: influence of process variables on the product quality. Sci Des Aliments. 10: 831–848 (1990) Vial C, Guilbert S, Cuq JL. Osmotic dehydration of kiwi fruits: influence of process variables on the color and ascorbic acid content. Sci Des Aliments. 11: 63–84 (1991) Riva M, Campolongo S, Leva AA, Maestrelli A, Torreggiani D. Structure-property relationships in osmo-air-dehydrated apricots cubes. Food Res Int. 38: 533–542 (2005) Panico A, Garufi F, Nitto S, Di Mauro R, Longhitano R, Magri G, Catalfo A, Serrentino M, De Guidi G. Antioxidant activity and phenolic content of strawberry genotypes from Fragaria x ananassa. Pharm Biol. 47: 203–208 (2009) Concha-Meyer AA, D'Ignoti V, Saez B, Diaz RI, Torres CA. Effect of storage on the physico-chemical and antioxidant properties of strawberry and kiwi leathers. J Food Sci. 81: 569–577 (2016) Lipińska L, Klewicka E, Sojka M. The structure, occurrence and biological activity of ellagitannins: a general review. Acta Sci Pol Technol Aliment. 13: 289–299 (2014) Al-Musharfi NK, Al-Wahaibi HS, Khan SA. 
Comparison of ascorbic acid, total phenolic content and antioxidant activities of fresh juices of six fruits grown in Oman. J Food Process Technol. 6: 11 (2015) Hossain SJ, El-Sayed M, Aoshima H. Antioxidative and anti α-amylase activities of four wild plants consumed by pastoral nomads. Egypt Orient Pharm Exp Med. 9: 217–224 (2009) This work was financially supported by SUSFOOD ERA-Net/NCBiR (National Centre for Research and Development); project no 5/SH/SUSFOOD1/2014. Implementation period: 2014-2016, Poland. The work was also co-financed by a statutory activity subsidy from the Polish Ministry of Science and Higher Education for the Faculty of Food Sciences of Warsaw University of Life Sciences. Department of Biotechnology, Microbiology and Food Evaluation, Division of Food Quality Evaluation, Faculty of Food Sciences, Warsaw University of Life Sciences - SGGW, 159c Nowoursynowska St, 02-776, Warsaw, Poland (Jolanta Kowalska & Tomasz Brzeziński); Department of Food Engineering and Process Management, Faculty of Food Sciences, Warsaw University of Life Sciences, 159c Nowoursynowska St, 02-776, Warsaw, Poland (Hanna Kowalska, Agata Marzec, Kinga Samborska & Andrzej Lenart). Correspondence to Jolanta Kowalska. Kowalska, J., Kowalska, H., Marzec, A. et al. Dried strawberries as a high nutritional value fruit snack. Food Sci Biotechnol 27, 799–807 (2018). https://doi.org/10.1007/s10068-018-0304-6
History of acid-base reactions Basic and acidic materials exist in nature, so it makes sense that our forefathers could have known what happens when they react. I would like to know when and how they learned about this. In particular: a. What is the earliest time in which people could witness an acid-base reaction that causes fire or an explosion (as in this video)? What are the earliest written accounts of such reactions? b. What are the earliest written accounts of neutralization reactions? c. When were acid-base reactions intentionally used, e.g. for neutralizations or to create explosions? acid-base history-of-chemistry Erel Segal-Halevi $\begingroup$ Related fun fact (sort of): Seneca the Younger (c. 4 BC – AD 65) is quoted saying: 'Anger: an acid that can do more harm to the vessel in which it is stored than to anything on which it is poured'. I am not sure to what extent the ancient Greeks understood acids/bases, but they had the terminology $\endgroup$ – Toke Faurby Apr 3 '18 at 15:42 There's a bit of a misconception on what actually happens in the video linked in your question: The reactions between alkali metals, namely lithium ($\ce{Li}$) and sodium ($\ce{Na}$), and water, diluted acid or concentrated sulfuric acid are not neutralisations, but redox reactions, that is, the transfer of an electron from one reaction partner to the other. \begin{align} \ce{2H2O &<=> 2H+ + 2OH-\\ 2Na &-> 2Na+ + 2e-\\ 2H+ + 2e- &-> H2 ^} \end{align} If you take these three reactions as a (mathematical) equation system, with which you're probably more familiar, and sum them up, the total reaction is \[\ce{2Na + 2H2O -> 2NaOH + 2 H2 ^} \] To put this into words: The metal is dissolved and forms metal hydroxide. The resulting solution is alkaline (basic), hence the name alkali metals. The reaction releases a lot of heat! Hydrogen gas is formed. Since the reaction releases a lot of heat, the hydrogen gas is burnt with the oxygen from the air, yielding water. This is the explosion that you noticed. Taking these vigorous reactions into account, it is not astonishing that in nature, sodium and lithium do not exist as metals, but in the form of various salts. It wasn't until 1807 that sodium metal was actually made in the laboratory, followed by lithium in 1818. Turning back to your question on neutralisations, let's define what the term actually means. (I apologize if I'm getting too colloquial here!) The term neutralisation is typically used to describe that two effects don't sum up to result in something stronger but rather incapacitate each other. In the case of acid-base reactions, this translates to: \[\ce{effective\ acid + effective\ base -> something\ ineffective + water}\] In water, a typical example would be the reaction of a strong acid (such as hydrochloric acid) with equimolar amounts of a strong base (such as sodium hydroxide): \[\ce{HCl + NaOH -> NaCl + H2O}\] Adding the right amount of a strong and corrosive acid to a solution of the equally corrosive and harmful caustic soda yields a harmless solution of sodium chloride (table salt) in water. The only side effect of the reaction is that the solution can get pretty hot, but an explosion is not to be expected here. Typically, usable explosions do not result from neutralisation reactions but from fast combustion reactions in which large volumes of gases are formed, possibly (but not necessarily) accompanied by the release of heat.
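To put a rough number on the heat effect mentioned above (a back-of-the-envelope estimate, assuming the textbook enthalpy of neutralisation of a strong acid with a strong base, roughly $57\ \mathrm{kJ\ mol^{-1}}$): mixing $100\ \mathrm{mL}$ of $1\ \mathrm{M}$ $\ce{HCl}$ with $100\ \mathrm{mL}$ of $1\ \mathrm{M}$ $\ce{NaOH}$ involves $0.1\ \mathrm{mol}$ of each, so \[q \approx 0.1\ \mathrm{mol} \times 57\ \mathrm{kJ\ mol^{-1}} \approx 5.7\ \mathrm{kJ},\] which warms the combined $200\ \mathrm{g}$ of solution by roughly \[\Delta T \approx \frac{q}{m\,c_p} \approx \frac{5700\ \mathrm{J}}{200\ \mathrm{g} \times 4.18\ \mathrm{J\ g^{-1}\ K^{-1}}} \approx 7\ \mathrm{K}.\] Noticeably warm, but nothing like an explosion.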
Taking into account that strong acids had to be made by men, I'd be surprised to find that these materials were wasted in simple neutralisation reactions. You will however find historical accounts on the use of the acids or the bases. Nitric acid was used to separate silver from gold (Scheidewasser). Calcium oxide (burnt lime), the precursor of calcium hydroxide, is used e.g. to make cement. Klaus-Dieter Warzecha $\begingroup$ Thanks for the detailed answer. So I understand that in ancient times people would not want to waste acids in neutralization reactions. But, did they know about accidental neutralizations? E.g. is there an account of someone complaining "My precious acid was wasted because an alkaline material was dropped into it", or something like this (not necessarily with the same terms)? $\endgroup$ – Erel Segal-Halevi Apr 9 '15 at 6:59 $\begingroup$ You are very much welcome! I'm afraid I can't answer that part. An expert in the history of chemistry is needed here, which I'm definitely not. Anyway I rather doubt that there are lots of recordings on these findings from ancient times. Mercury was known back then, but did Aristotle write about adding vinegar to cinnabar? No idea. I wouldn't go back beyond the 17th century when looking for understandable sources. $\endgroup$ – Klaus-Dieter Warzecha Apr 9 '15 at 7:16 $\begingroup$ Klaus-Dieter Warzecha, Why do you, in the above neutralization reaction, designate "salt" to be "something ineffective"? Shouldn't rather water, which contains the parts (acid $\ce{H^+}$ and base $\ce{OH^-}$ ) that have been made ineffective, have this label? — But the salt ions are basically the same as before the reaction since they are not more or less effective than before. $\endgroup$ – user45298 Jul 10 '17 at 1:09
Has the semantics of TeX (as a programming language) ever been formalized? It seems to me that the macro language employed by $\TeX$ can maybe be seen as some kind of term rewriting system or some kind of programming language with call-by-name scoping. Even modern implementations of the $\TeX$ engine (e.g. $\mathit{Xe}\TeX$) interpret code in a quite direct way and I'm not aware of any attempt at optimizing the execution (like modern optimizing interpreters can do). However, devising correct optimization passes for a language like $\TeX$ is going to be very difficult because of the "action at a distance" that macro redefinitions can have, and the ability of redefining macros by calling them by name. So implementing a hypothetical optimizing interpreter for $\TeX$ sounds like a very difficult problem in practice, but also a very useful one, since $\TeX$ is used all over math and science and slow compilation times are a known drawback of the system. Note that the majority of time is spent interpreting code, not computing the actual typesetting, especially when computationally heavy packages are used (such as tikz). Maybe a formal semantics for the language could be a start to address the problem. So has the semantics of the $\TeX$ programming language ever been formalized? pl.programming-languages semantics term-rewriting-systems gigabytes $\begingroup$ Partial answer in tex.stackexchange.com/questions/4201/… $\endgroup$ – Amaury Pouly $\begingroup$ Thanks! Although I'm not interested in formalizing TeX's syntax into a context-free grammar, the answer is interesting. However I think it confuses levels a bit. Grammars are never enough to know if a piece of code in any language is wellformed or not, because other passes are needed such as type checking or variable look-up. Nevertheless most languages' grammars are described with BNFs modulo those aspects. Anyway, I'm more interested in the semantics of the macro language, not the grammar. $\endgroup$ – gigabytes $\begingroup$ To be honest the answer's author addresses this concern in the comments of other answers, the point being that in the case of TeX, parsing involves evaluation and thus to know if a piece of code is wellformed you may have to evaluate an arbitrary piece of code. That's again about syntax, anyway. $\endgroup$ $\begingroup$ In this blog entry rjlipton.wordpress.com/2011/03/09/tex-is-great-what-is-tex , Lipton relates that Knuth never formally defined $\TeX$. $\endgroup$ – Lamine $\begingroup$ Well, the only thing that comes close to what you suggest is initex, which is a "precompiler", basically you can have TeX perform certain operations, then stop its run, save the current state as a "format" (file.fmt) which is then loaded quite fast. This is actually what's going on with LaTeX itself: it's built over TeX core this way, similarly plain TeX, ConTeXt (though that's a bit more complicated), etc. $\endgroup$
— Donald Knuth, Digital Typography, page 235 I have read a lot over the last couple of years about the early history (circa 1977) of TeX, and a lot of what Knuth has written. My conclusion is that the moment we speak about "TeX (as a programming language)", something is wrong already. If we look at the early "design documents" for TeX written before (see TEXDR.AFT and TEX.ONE, published in Digital Typography), it is clear that Knuth was designing a system primarily intended for typesetting The Art of Computer Programming (he has said (e.g. here) that the main users he had in mind were himself and his secretary), with the idea that, suitably modified, it may be useful more generally. To save typing, for things one repeatedly had to do (e.g. every time TAOCP needed to include a quotation from an author, you'd want to move vertically by a certain amount, set a certain lineskip, pick up a certain font, typeset the quote right-aligned, pick up another font, typeset the author's name…), there were macros. You can guess the rest. What we have in TeX is a case of "accidentally Turing-complete" (more), except that it happened in the midst of a community (computer scientists and mathematicians, and DEK himself is to "blame" too) who were (unfortunately) too clever to ignore this. (Legend has it that Michael Spivak had never programmed before he encountered TeX, but he was so taken with it that he ended up writing AMS-TeX, at the time one of the most complicated set of macros in existence.) Because TeX was written to be portable across a large number of systems (which was a big deal at the time), there was always a temptation to do everything in TeX. Besides, because of his compiler-writing experience, Knuth wrote TeX like a compiler, and occasionally described it as one, and if the program that works on your input is a "compiler", then surely you're programming, right? You can read a bit more about how Knuth didn't intend for any programming to be done in TeX, and how he "put in many of TeX's programming features only after kicking and screaming", in this answer. Whatever his intentions were, as I said, people did start to figure out ways to (ab)use the TeX macro system to accomplish surprising feats of programming. Knuth found this fascinating and (in addition to adding some features into TeX itself) included a few of these in Appendix D "Dirty Tricks" of The TeXbook, but it turns out, despite the name, that "nine out of ten examples therein are used in the implementation of LaTeX". Let me put it another way: LaTeX, the macro system that Leslie Lamport wrote on top of TeX, as an idea, is a great one. Authoring documents in a semantic, structured, human-oriented way, rather than (Knuth) TeX's page-oriented way, (or as Lamport called it, logical rather than visual) is a great one. But implementing something as complicated as LaTeX using TeX macros rather than in a "proper" programming language is, in my view and at least if it were done today, somewhere between a giant mistake and an act of wanton perversity. Even Knuth is shocked that people don't just extend the TeX program instead of doing everything in TeX macros. Today there are much better ways to do "programming"; you can use an external program in any of the many languages widely available on most people's computers, or you can use LuaTeX and program in Lua (and do a better job than you ever could with TeX macros alone, because you can manipulate internal structures and algorithms at the right level). 
And if you do it right, you could have programs that work better or faster than those implemented in TeX macros. The task of making programs in TeX faster is almost amusing when seen in this light, and reminiscent to me of the final words of the paper describing another "accidentally Turing complete" programming "language": Tom Wildenhain's lovely "On the Turing Completeness of MS PowerPoint (video) from last year: While the PPTXTM proves the theoretical possibility of PowerPoint development, […]. Work also needs to be done in PowerPoint application optimization. There is a lot of potential here to exploit PowerPoint's automatic buffering of the next slide, which through careful slide placement may be used to greatly increase application performance. The anecdote that Lipton describes is illustrative. Not only has there never existed a formal semantics of TeX, there is also unlikely to be one. It is just too "weird" a "language" for that, and (as I hope I have explained above) it is not even intended as a language. For instance, you may think you are writing macros as functions, but introduce a single stray character (even a space) in it, and TeX immediately treats it as a typesetting instruction. In short: TeX reverts to typesetting at the earliest opportunity, and when it expands macros it does so grudgingly (impatient to get to its "real" work of typesetting), and these expansions can themselves depend on hundreds of kinds of "state" within the TeX program (the values of parameters like \hsize or \baselineskip, the contents of boxes and other registers…), which is why any formal semantics of TeX must necessarily be something that takes into account the entire state of the program and all its memory, until we end up with something like "the meaning of TeX code is whatever TeX does", in a form more complex than the TeX program itself. So fine, (if I've convinced you) TeX was not intended as a programming language and does not work like real ones, there is no formal semantics, and there are better ways to program today — but all this does not help with your actual question/problem, which is that in practice, many documents meant for processing by TeX do use complicated macros (like LaTeX and TikZ), stunning edifices of monstrous complexity built on top of each other. How can we make it faster and devise "optimization passes"? You will not get there with formal semantics IMO. I have thought recently about this, and the following are some preliminary thoughts. My impression is that Knuth was one of the experienced compiler-writers in the 1960s (that's why he got asked to write the compilers book that turned into The Art of Computer Programming), and TeX is (in many ways) written the way compilers were written in the 1970s, say. Compiler techniques and design have improved since then, and so can the TeX program be. Here are some things that can be done, by way of speeding things up: At heart, TeX is written like an "interpretive routine", where the TeX "eyes" and "mouth" (its input routines) deliver instructions to its "stomach" (its semantic routines), to be executed one by one. (You can see a list in part 15 of the TeX program.) For example, when TeX's eyes/mouth encounter \hfill or \hskip in its input, the stomach gets a "hskip" command, that it acts on. 
This is similar to what are called bytecode interpreters today, and there may be value in refactoring the TeX program to emit these bytecodes/opcodes explicitly, so that we may be able to use existing (more conventional today) compiler techniques. Or at least cache them to avoid redoing work. There are of course many challenges: The execution of a command in the "stomach" usually still involves reading the input, i.e. the work of the input routines and semantic routines do not happen in separate phases. E.g. the "hskip" command, if given \hskip (rather than say \hfill) will invoke scan_glue to read a glue specification from the input, which in turn can involve expanding macros and so on until enough tokens are found for the glue, leaving the input stack in a substantially different state. Engines like eTeX and pdfTeX and XeTeX and LuaTeX introduce new commands and primitives (the eTeX / pdfTex primitives are practically used by everyone in practice); you'll need to support them too, not just those in the original Knuth's TeX program. We could do something like "speculative execution", processing future paragraphs (maybe starting at natural checkpoints like new sections or chapters) in parallel (using multiple cores), keeping track of all the TeX internal state they use (depend on), and throwing away that work (and redoing it) if later we find out that an earlier paragraph ends up changing some of that state. At the moment TeX runs entirely sequentially on 1 processor; typical hardware has moved in a different direction and multiple cores are available. Even simpler, we could simply cache the work (what TeX state was accessed and modified) by a certain section of the input file. (We could do this caching at the level of the input—the net result of expanding all macros—or at the level of what set of boxes were assembled, or all the way to the total state of the program.) E.g. the contents inside a \begin{tikzpicture} … \end{tikzpicture} is unlikely to depend a lot on TeX state like the page number counter, so when we recompile the TeX document we can simply reuse all the work — if we have kept track of enough information to know that it is safe to do so. (Of course TikZ in particular has ways to externalize this and include the results, but the idea is more general.) We can use techniques (e.g. those used in functional programming) to do some TeX processing with "holes" — e.g. right now, when you write \ref{foo} in LaTeX to refer to a (say future) section number, it works only in two compilation passes: first the entire document is processed (all paragraphs typeset, floats positioned on pages, etc.) with the section numbers being written out to an auxiliary file, then on a second pass all the work is done again, with the section number actually available this time. (This kind of hack may have been inevitable at the time, and I know the impact on the running time is "only a constant factor", but….) Instead, what if we could simply process the document with a "hole" (a box with undetermined contents but some estimated width) left for the section number, then at the end of the document processing populate the box? (Yes, our estimated width may turn out wrong and the paragraph may need reprocessing and consequently even the page, but we could either do the work if necessary, or accept, for speed, a mode in which we'll allow a wrong width for the section number.) 
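To make the caching idea above a little more concrete, here is a minimal sketch in Python (the chunk and state names are invented for illustration and are not TeX's actual internals): the result of processing a chunk is reused only if every piece of interpreter state the chunk read last time still has the same value.

```python
class ChunkCache:
    """Reuse the result of processing an input chunk when every piece of
    interpreter state the chunk depended on last time still has the same
    value (a crude form of the state-tracking cache sketched above)."""

    def __init__(self):
        # chunk text -> list of (state_snapshot, cached_result) entries
        self.entries = {}

    def process(self, chunk_text, state, process_fn):
        for snapshot, result in self.entries.get(chunk_text, []):
            if all(state.get(name) == value for name, value in snapshot):
                return result                     # safe to reuse the old work
        # Do the real (expensive) work; process_fn reports which state
        # variables it actually read, so we know when reuse is safe later.
        result, reads = process_fn(chunk_text, state)
        snapshot = tuple(sorted(reads.items()))
        self.entries.setdefault(chunk_text, []).append((snapshot, result))
        return result


def toy_process(chunk_text, state):
    """Stand-in for expensive processing (think of a tikzpicture body):
    it 'typesets' using the current hsize and reports that hsize was the
    only state it consulted."""
    box = f"box[{chunk_text!r} at hsize={state['hsize']}]"
    return box, {"hsize": state["hsize"]}


cache = ChunkCache()
state = {"hsize": 345, "pageno": 1}
print(cache.process("tikz chunk", state, toy_process))   # computed
state["pageno"] = 7                                       # irrelevant change
print(cache.process("tikz chunk", state, toy_process))   # reused
state["hsize"] = 360                                      # relevant change
print(cache.process("tikz chunk", state, toy_process))   # recomputed
```

The real difficulty, as noted above, is knowing the full set of state a chunk depends on; in TeX that set can be large and is only discovered during expansion.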
Similar techniques can work for interactive editing of a TeX document: when you edit a paragraph it can be processed "live", with future paragraphs simply moved down the galley (say). We know it's possible, as there already exist (commercial) TeX implementations that do this, e.g. BaKoMaTeX and Texpad and the former Textures. (See the video on the home page of BaKoMa-TeX and similarly TeXpad's, e.g. this video — I tried the latter and it was unbearably buggy in practice though.) Not to be underestimated: the value of showing things to the user, making TeX more debuggable. Right now, users see only their TeX input and have no idea exactly what work TeX is doing, e.g. how much time it's spending on line-breaking for paragraphs, or on macro-expansion (and of which macros), what boxes it's assembling and throwing away, what specials are being written out by which package, etc. I (perhaps optimistically) believe that there exist users who would like to see this information and would find it useful e.g. to know whether the weird package they're using for shading equations with a gradient in the background is cheap (adding little to the processing time) or not. By seeing where a lot of wasteful work is being done, they could throw some of it away (at least until their final print run). (This is somewhat like compilers or other tools inserting profiling information into programs.) Making TeX more transparent and debuggable may be a huge usability improvement, for instance. (TeX is already quite user-friendly and debuggable for its time IMO if we use mostly plain TeX with very few macros, but not with LaTeX or how the bulk of users encounter it today.) Also, any future work should probably take into account (build on) LuaTeX which is the best modification of TeX we have currently. All of these are just idle thoughts (I haven't implemented any of them, to know the effort required or how much speedup we'd gain), but I hope this goes some way towards answering your question or giving you ideas for future directions. ShreevatsaRShreevatsaR $\begingroup$ I surely agree with you that programming in TeX is masochistic but as you said, people do it anyway and, as you pointed out, the benefits of better tooling would go down to users the most. In the second part of your answer you touch many of the ideas I had in mind before asking the question. I might add that because of \widthof and similar, the termination of a loop might depend on the entire typesetting algorithms and font definitions. So that's really weird yes XD $\endgroup$ $\begingroup$ This answer needs a major rewrite (didn't have time to write a short one!), but super coincidentally, I just now came across this quote from Knuth in Peter Seibel's Coders at Work in answer to a question about formal correctness: "Or TeX, for example, is a formal mess. It was intended to be for human use, not for computer use. To define what it means for TeX to be correct would be incomprehensible. Some methods for formal semantics are so complicated that nobody can comprehend the definition of correctness." $\endgroup$ – ShreevatsaR $\begingroup$ So TeX is a programming language but I had to put in those features kicking and screaming. […] In a way I resent having every language be universal because they'll be universal in a different way. […] I was really thinking of TeX as something that the more programming it had in it, the less it was doing its real mission of typesetting. 
When I put in the calculation of prime numbers into the TeX manual I was not thinking of this as the way to use TeX. I was thinking, "Oh, by the way, look at this: dogs can stand on their hind legs and TeX can calculate prime numbers." $\endgroup$ $\begingroup$ Honestly I don't see Knuth's reason to add programming facilities to TeX by "kicking and screaming". TeX programming is not used to do arbitrary computation, but to build abstractions around problems, often coming from TeX syntax itself, so that users can more powerfully use it for typesetting. So I don't agree with Knuth saying the more programming he put in it the less it would do typesetting. Maybe if he accepted the need for general programmability from the start he could have come up with something way better. The same thing happened with the web, and now the world runs on JavaScript. $\endgroup$ No, to my knowledge there has been no work on formalizing TeX of the kind you are interested in. (What follows is a subjective and personal commentary). I think it is an intriguing and well-posed idea, and your motivation of using it to perform optimizations sounds reasonable -- another related question is whether you could define a bytecode format for it to speed up interpretation. On the other hand, the idea has two downsides. First, it is not clear to me that there is a large potential for optimizations (for example, what kind of program-preserving transformations could one perform to speed up computation?), as it may be that the language semantics is intimately related to parsing the character flow, and thus not very accommodating to the design of optimization-friendly intermediate representations. Second, the need for improvements in TeX interpretation speed is not well-established: the speed of batch building has remained reasonable thanks to hardware improvements. Cases where speedups could be welcome are complex graphics packages (beamer presentations can take quite some time to build), packages embedding rich computations (but then another language may be more appropriate), and use cases requiring fast rebuild for instant user feedback (but then incrementality, rather than optimization, may be the point; a formal semantics would certainly help reason about incremental implementations as well). That is to say: this sounds like a fun, instructive topic, but it is not clear to me that the practical justifications for doing the work are strong. If someone was interested in doing it out of curiosity, that sounds like an excellent adventure, but otherwise there may be other ways to employ the same skillset whose impact would be more sought-after by end-users. gasche $\begingroup$ Thanks. As you said, incremental compilation is maybe more interesting than optimisation here, especially if we think about how poorly editors can currently integrate with the language $\endgroup$ $\begingroup$ Another application which is related to optimisation is to automatically clean up code, for example removing useless "\expandafter"s or similar. $\endgroup$ $\begingroup$ "complex graphics package" Of course, if you use tikz or pgf graphics, you can always externalize them and save a lot of time on builds when they don't change (which is a lot like incremental compilation, really). $\endgroup$ – JAB
Intratumoral 18F-FLT infusion in metabolic targeted radiotherapy Thititip Tippayamontri1,2,4, Brigitte Guérin1,3, René Ouellet3, Otman Sarrhini3, Jacques Rousseau3, Roger Lecomte1,3, Benoit Paquette1,2 & Léon Sanche1,2 EJNMMI Research volume 9, Article number: 33 (2019) Cite this article The goal of targeted radiotherapy (TRT) is to administer radionuclides to tumor cells, while limiting radiation exposure to normal tissues. 3′-Deoxy-3′-[18F]-fluorothymidine (18F-FLT) is able to target tumor cells and emits a positron with energy appropriate for local (~ 1 mm range) radiotherapy. In the present work, we investigated the potential of TRT with a local administration of 18F-FLT alone or in combination with 5-fluorouracil (5FU), which acts as a chemotherapeutic agent and radiosensitizer. Treatment efficiency of 18F-FLT combined or not with 5FU was evaluated by intratumoral (i.t.) infusion into subcutaneous HCT116 colorectal tumors implanted in nu/nu mice. The tumor uptake and kinetics of 18F-FLT were determined and compared to 2-deoxy-2-[18F]-fluoro-D-glucose (18F-FDG) by dynamic positron emission tomography (PET) imaging following i.t. injection. The therapeutic responses of 18F-FLT alone and with 5FU were evaluated and compared with 18F-FDG and external beam radiotherapy (EBRT). The level of prostaglandin E2 (PGE2) biosynthesis was measured by liquid chromatography/tandem mass spectrometry (LC/MS/MS) in order to determine the level of inflammation to healthy tissues surrounding the tumor, after i.t. injection of 18F-FLT, and compared to EBRT. We found that i.t. administration of 18F-FLT offers (1) the highest tumor-to-muscle uptake ratio not only in the injected tumor, but also in distant tumors, suggesting potential for concurrent metastases treatment and (2) a sixfold gain in radiotherapeutic efficacy in the primary tumor relative to EBRT, which can be further enhanced with concurrent i.t. administration of the radiosensitizer 5FU. While EBRT stimulated PGE2 production in peritumoral tissues, no significant increase of PGE2 was measured in this area following i.t. administration of 18F-FLT. Considering the biochemical stability of 18F-FLT and the physical properties of localized 18F, this study shows that TRT via intratumoral infusion of 18F-FLT and 5FU could provide a new effective treatment option for solid tumors. Using this approach in a colorectal tumor model, the tumor and its metastases could be efficiently irradiated locally with much lower doses absorbed by healthy tissues than with i.t. administration of 18F-FDG or conventional EBRT. Positron emission tomography (PET) is capable of assessing tumor proliferation quantitatively, non-invasively, and reproducibly [1]. Proliferation is a key feature of tumor progression and the principal mechanism underlying 3′-deoxy-3′-[18F]-fluorothymidine (18F-FLT) PET imaging [2, 3]. 18F-FLT crosses the cell membrane via specific nucleoside transporters [4]. The principal mechanism underlying 18F-FLT PET imaging is the uptake by proliferating cells of the tracer into the pyrimidine exogenous salvage pathway or endogenous de novo pathway. Aside from phosphorylation in proliferating cells, the compound is metabolically stable in vivo, 18F is not released from FLT and hence the free atom poses no risk of accumulation in sensitive tissues [5]. However, selective uptake of FLT in the bone marrow, a tissue with a high proliferative rate, has considerably limited investigations on its potential use as a theranostic agent administered intravenously (i.v.) 
[6]. Similar to thymidine, 18F-FLT once internalized into the cytoplasm is phosphorylated into 18F-FLT monophosphate by thymidine kinase 1 (TK1) that has the particularity to become trapped inside the cell without being incorporated into the DNA [7]. Alternatively, in the de novo synthesis pathway, deoxyuridine monophosphate is converted to thymidine monophosphate by thymidine synthase which can be phosphorylated and incorporated into DNA [8]. The accumulation of 18F-FLT is dependent on the presence of TK1, which is closely associated with cellular proliferation [7], and poor prognosis for cancer patients [9]. 18F-FLT has therefore the unique potential to preferably target this group of poor prognosis cancers. Preclinical and clinical studies have demonstrated a considerable interest in 18F-FLT as a PET tracer in breast, lung, and brain cancer imaging [4, 7, 10,11,12]. 18F-FLT PET has been previously shown to provide valuable information for response assessment of tumor therapies [3, 7], and it has found limited use for tumor therapy follow-up in clinical trials [8, 13]. 18F-FLT tumor uptake is generally lower than that of 2-deoxy-2-[18F]-fluoro-D-glucose (18F-FDG), but its selectivity for tumor versus inflammatory cells often makes it a better marker of tumor cells than the glucose analog, 18F-FDG [7]. Moreover, 18F-FLT showed low uptake in the brain and heart, in contrast to 18F-FDG, confirming its higher tumor specificity [14]. The only limitation remains the detection and measurement of bone tumors and metastases, due to the high 18F-FLT uptake in healthy bone marrow. Positron-labeled radionuclides are beginning to be widely used in combined diagnostic and therapeutic applications, which are also known as the theranostic feature of nuclear medicine [15]. The average kinetic energy of the positron emitted by 18F is about a quarter of that generated by its annihilation [11]. However, due to the physical properties of 18F radioactive decay, targeted radionuclide therapy (TRT) by emission of the positron is expected to be highly effective, since radiation damage from positrons can be localized within ~ 1 to 2 mm [11] from their source. On the other hand, the 511 keV photons from annihilation, which provide the tomographic images, have almost a three orders of magnitude longer penetration depth in tissues [16]. These physical characteristics imply that, in TRT, the dose densities absorbed by the body are negligible compared to that absorbed locally in the targeted cancer cells (i.e., radiation flux decreases as the square of the distance from the annihilation point), unless some tissues express a particularly high uptake, such as the bone marrow [6] after i.v. administration. Moadel et al. investigated cancer cell affinity of PET radiotracers. They have shown, for the first time at a cellular level, the radiotherapeutic potential of i.v. administration of 18F-FDG [17, 18]. Although 18F-FDG significantly slowed tumor growth and prolonged survival in comparison with non-treated animals, high cardiac and brain uptakes, as well as an important renal radiotoxicity, were observed, partially offsetting the benefits of this novel approach. This barrier to such a theranostic clinical application could be overcome to a large extent by 18F-FLT. The latter has been developed to accumulate specifically in tumors, proportionally to the proliferation rate of the active cells [2, 3], and showed low uptake in the brain and heart, compared to 18F-FDG [14]. 
Moreover, utilizing 18F-FLT in cancer treatment not only allows improving the determination of tumor prognosis and monitoring tumor response to anti-cancer treatment, but also may offer some advantages of a positron-labeled radiopharmaceutical as a therapeutic agent. Combined with i.t. injection, these benefits may partly offset the dose delivered to the bone marrow by 18F-FLT. Here, we investigate the potential of metabolic TRT with 18F-FLT in an animal model of human colon cancer derived from HCT116 cells, which are known to be highly proliferative, when implanted in immuno-deficient mice [19]. Considering that the therapeutic properties of 18F-FLT may be dependent on the mode of administration, we explore the biodistribution by PET and TRT of 18F-FLT in mice, after direct infusion of the radioactive compound in a primary tumor by convection-enhanced delivery (CED) [20]. 18F-FLT was administered directly into one tumor, while contralateral tumors simulated distant metastases. As a predictive tool in assessing radiotherapy efficacy, tumor uptake was monitored by PET following 18F-FLT and 18F-FDG i.t. infusion under identical conditions. The results were compared to those obtained from a single 15 Gy dose of external beam radiotherapy (EBRT), considered equivalent to the 25 fractions of 2 Gy [21] frequently used in cancer treatment. To highlight the full potential of i.t. 18F-FLT treatment, such comparisons were also made in combination with the chemotherapeutic agent 5FU. 18F-FDG and 18F-FLT 18F-FDG (CYCLODX, CIUSSS de l'Estrie - Centre Hospitalier Universitaire de Sherbrooke, Canada) and 18F-FLT were prepared by the Sherbrooke Molecular Imaging Center (CIMS, Sherbrooke, Quebec, Canada). 18F-FLT was produced using the protected nosylate precursor and the method of Yun et al. [22]. The HCT116 human colorectal carcinoma cell line obtained from ATCC was routinely cultured in modified Eagle's medium (Sigma-Aldrich, Oakville, Canada) supplemented with 10% fetal bovine serum, 2 mM glutamine, 1 mM sodium pyruvate, 100 units/ml penicillin, and 100 μM streptomycin in a fully humidified incubator at 37 °C in an atmosphere containing 5% CO₂. Human colorectal cancer xenograft mouse model Experiments were performed with outbred male nude mice at 4–6 weeks of age (Charles River Laboratories, Saint-Constant, QC, Canada). The animals were maintained in an animal facility, under specific pathogen-free conditions. Housing and all procedures involving animals were performed according to the protocol approved by the Université de Sherbrooke Animal Care and Use Committee (protocol number 235-14B). Human colorectal HCT116 tumor cells (2 × 10⁶, 0.1 mL) were inoculated subcutaneously (s.c.) into each rear thigh and one on the right shoulder. During each animal handling implantation, the animals were anesthetized with an intraperitoneal injection of ketamine/xylazine (87/13 mg/mL) at 1 mL/kg. Tumor size measurements began 1 week post-injection and continued biweekly. Tumor volumes were calculated with the following formula: V (mm³) = π/6 × a (mm) × b² (mm²), where a and b were the largest and smallest perpendicular tumor diameters, respectively. All experiments began when tumor volumes reached a diameter of about 5–7 mm. The tumor-bearing animals were randomized into different groups of two to four animals each. Distribution kinetics and tumor clearance of i.t. 18F-FLT and 18F-FDG The animals were anesthetized by inhalation of 1.5% isoflurane and 1.5 L/min oxygen during i.t. injection and PET imaging procedures.
The single i.t. infusion of 5 MBq of 18F-FLT or 18F-FDG solution was applied into the tumor on one side of the rear thigh, whereas the contralateral tumors were not treated. The solution was introduced into the central section area of the tumor. For each injection, the needle tip placement was at approximately one-third depth in the tumor along with the needle insertion direction. The leakage of the radiolabeled compound is the main concern for i.t. injection. To avoid this complication, the i.t. infusion was performed at a slow infusion rate (10 μL/min) over 10 min and the needle was left in place within the tumor for about 5 min following completion of the i.t. infusion to reduce any backflow of the 18F-FLT or 18F-FDG solution. Total infusion volume for each tumor was limited to about 30–50% of the tumor volume from caliper measurements, which were determined on the day of the study. The administration of 18F-FLT or 18F-FDG by i.t. injection was performed with the animal placed inside the scanner, at the start of data acquisition at time 0. Dynamic PET data were acquired in list mode from time 0 to 120 min post-injection using the Triumph/LabPET8™ platform (Gamma Medica, Northridge, CA) at the CIMS. PET images were reconstructed on a 120 × 120 × 128 matrix with a 0.5 × 0.5 × 0.6 mm³ voxel size using the standard LabPET 3D maximum likelihood expectation maximization algorithm implementing a 3D model of the physical detector response. Frame durations for the reconstructed images were 10 × 1 min, 10 × 5 min, and 4 × 15 min. All PET images were corrected for physical radionuclide decay, dead time, and differences in crystal detection efficiency. To quantify the radiotracer uptake, regions of interest (ROI) were drawn around tumors, organs, and whole body in the last image frame using the Amide software [23]. These ROIs were then applied to all frames to obtain time-activity curves (TAC) for each organ. The ROI activity was expressed as percent injected dose per gram of tissue (%IA/g) with the whole body radioactivity measured by PET. The residence time (in hours) for each organ was calculated using decay-uncorrected TAC as follows [24]: $$ \tau_{\mathrm{h}} = \frac{\int_{0}^{2\,\mathrm{h}} \mathrm{TAC}(t)\, dt + \mathrm{TAC}(2\,\mathrm{h}) \int_{2\,\mathrm{h}}^{\infty} e^{-\lambda t}\, dt}{A_{0}} $$ where TAC(t) is the activity in the organ at time t, TAC(2 h) is the activity in the organ at the last time point of measurement (2 h), λ is the physical decay constant of 18F, and $A_{0}$ is the injected activity in the main tumor. Trapezoidal rule was used to numerically integrate the organ TAC over the measurement time, while the analytical integration was performed on the exponential decay term. In order to assess the radiation burden of 18F-FLT to bone and bone marrow, the uptake of 18F-FLT following i.t. (n = 3) and i.v. (n = 1) administration of 18F-FLT was compared from PET acquisitions by tracing ROI on the forepaw long bones and on the spine. The relative exposure of the bone marrow was estimated from the area under the non-decay-corrected time-activity curves (AUC) extrapolated to infinity with the physical decay of 18F. Determination of the tumor response after i.t. infusion of 18F-FLT and 18F-FDG radiotherapy and EBRT In order to determine the dose dependence of tumor response, single i.t. injections of 15 and 25 MBq 18F-FLT or 18F-FDG were administered into the tumor on one side of the rear thigh, whereas the contralateral tumors were left untreated.
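As an aside on the residence-time expression given above, the following is a minimal sketch in Python of the same computation (trapezoidal integration of the decay-uncorrected TAC plus the analytic tail term); the numbers in the example are invented for illustration and are not data from this study.

```python
import numpy as np

F18_HALF_LIFE_H = 109.77 / 60.0                 # 18F half-life in hours (~1.83 h)
DECAY_CONSTANT = np.log(2) / F18_HALF_LIFE_H    # lambda, in 1/h

def residence_time(t_h, tac_mbq, injected_mbq):
    """Residence time (h): trapezoidal area under the decay-uncorrected
    TAC up to the last frame, plus the analytic tail term of the
    expression above, divided by the activity injected into the main
    tumor (A0)."""
    measured = np.trapz(tac_mbq, t_h)
    # TAC(2 h) * integral from 2 h to infinity of exp(-lambda * t) dt,
    # taken exactly as the expression above is written
    tail = tac_mbq[-1] * np.exp(-DECAY_CONSTANT * t_h[-1]) / DECAY_CONSTANT
    return (measured + tail) / injected_mbq

# Hypothetical example values (illustration only, not study data)
t = np.array([0.1, 0.25, 0.5, 1.0, 1.5, 2.0])   # frame times, hours
tac = np.array([4.0, 3.1, 2.2, 1.3, 0.8, 0.5])  # MBq, not decay-corrected
print(f"residence time = {residence_time(t, tac, injected_mbq=5.0):.2f} h")
```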
All experiments began when tumor volumes reached a diameter of about 5–7 mm. External beam gamma radiation was performed with a 4C Gamma Knife (Elekta Instruments AB, Stockholm, Sweden). A single i.t. injection of 0.9% saline was administered into the tumor to emulate the radiotracer administration. Mice were anesthetized and positioned in our in-house stereotactic frame designed for the 4C Gamma Knife [25]. The radiation treatment (15 Gy, dose rate of 3.6 Gy/min) using 8-mm collimators was delivered at predetermined coordinates targeting the tumor. Radiation was applied to the tumor located on one side of the rear thigh, whereas the other side was kept as the non-irradiated control tumor. Tumor growth was measured after treatment twice a week. Tumor volumes were calculated as described in the mouse model section. Fivefold growth delay (5Td) was considered to be the time required for the tumor volume to increase by a factor of 5, compared to the initial volume at the beginning of treatment. Tumor growth delay (TGD) was calculated by subtracting the 5Td value of the treated group from the 5Td of the control group. An enhancement factor (EF) was also calculated by dividing the 5Td of the treated group by the 5Td of the non-treated control group. Assessment of absorbed dose from PET imaging by Fricke dosimetry The mean 18F-FDG activity in different tissues measured in our mouse model by PET imaging was correlated to the absorbed dose assessed in vitro by Fricke dosimetry [26, 27], as described by Tippayamontri et al. [28]. Briefly, the dose-response of the Fricke dosimeter and total activity measured by PET were determined at different times for a 3 mL Fricke solution and a 3 mL of deionized water that contained 60 MBq of 18F-FDG. The total absorbed dose in the Fricke solution was assessed at 1450 min. The dose was correlated to the activity measured by PET for the same solution. The characteristics of the LabPET8™ scanner (Gamma Medica) was previously described in [29]. During the calibration procedure, PET imaging was performed at time 0, 0.5, 1, 2, 3, and 4 h after adding 18F-FDG into the Fricke solution, with scanning times of 3, 5.12, 9.50, 16.24, and 30.06 min, respectively. The optical density and radioactivity were measured prior to and after the PET scans. CT imaging of the vials was performed for attenuation correction of the emission data. The raw data were reconstructed and corrected relative to the reconstructed resolution of the PET scanner. The decay, dead time, random subtraction, and differences in crystal detection efficiencies were also included in the correction factor. A region of interest (ROIs) analysis was carried out with the built-in function in the LabPET image analysis software. The radioactivity in the subject vial (Fisherbrand 15 × 45 mm, 1DR, Fisher Scientific) was obtained as cps/mL from reconstructed PET images. The relationship of absorbed dose (Gy) and time-integrated activity (MBq.h) with administered activity (MBq) is shown in the Additional file 1: Figure S1 and Additional file 2: Figure S2, respectively. To obtain quantitative radioactivity data with mice, the PET system was calibrated by acquiring data from a mouse phantom filled with an 18F-FDG solution of known radioactivity. Thus, the pixel counts of the PET image in cps/mL could be converted into the activity concentration (MBq/mL) by multiplying the ROIs with known added activity of 18F-FDG. Total accumulated absorbed dose in the tumor tissue and normal organs can be calculated by following Eq. 1. 
$$ D\left(\mathrm{Gy}\right)=\breve{A}\left(\mathrm{MBq}.\mathrm{h}/\mathrm{g}\right)\times M\ \left(\mathrm{g}\right)\times C\left(\mathrm{Gy}/\mathrm{MBq}.\mathrm{h}\right) $$ Ă is the time-integrated activity per gram of tissue (MBq.h/g) M is the tissue mass (g) D is the absorbed dose (Gy) C is the conversion factor of 0.09 Gy/MBq.h, derived from the relationship between Fricke dosimetry and PET imaging (Additional file 3: Conversion factor for absorbed dose estimated by the Fricke chemical primary standard dosimeter). Prostaglandin E2 quantification by liquid chromatography/tandem mass spectrometry PGE2 has been quantified to assess inflammation [30]. Muscle tissues nearby the irradiated area were extracted and snapped frozen with liquid nitrogen after 4 h of either i.t. injection of 5 MBq 18F-FLT or 15 Gy gamma irradiation. Tissues were homogenized with a Dounce homogenizer in 2 mL of acetone-saline solution (2:1), containing 10 ng of the internal standard prostaglandin E2d4 (PGE2-d4), which contains four deuterium atoms at the 3, 3′,4, and 4′ positions (internal standard, Cayman Chemical, Ann Arbor, MI, USA) and 0.05% butylated hydroxytoluene to prevent the oxidation of prostanoids. The homogenate was transferred to a screw-top tube, vortexed for 1 min, and centrifuged (10 min, 1800 g, room temperature). The supernatant was transferred to another tube and mixed with 2 mL hexane by vortexing for 1 min. After centrifugation (10 min, 1800 g, room temperature), the upper phase containing lipids was discarded. The lower phase was acidified with 30 μL of 2 M formic acid and then 2 mL of chloroform containing 0.05% butylated hydroxytoluene were added. The mixture was vortexed and again centrifuged (10 min, 1800 g, room temperature) to separate the two phases. The lower phase containing chloroform was transferred to a conical centrifuge tube for evaporation with a SpeedVac Concentrator (Sarant, Nepean, ON, Canada). Samples were reconstituted in 100 μL methanol:10 mM ammonium acetate buffer, pH 8.5 (70:30), and filtered with Spin-X centrifuge tube filter 0.45 μm (10 min, 1300 g, room temperature). Samples were stored at − 20 °C for later analyses. PGE2 was quantified by LC/MS/MS using the same procedure as reported by Desmarais et al. [31]. Briefly, the apparatus consisted of an API 3000 mass spectrometer (Applied Biosystem, Streesville, ON, Canada) equipped with a Sciex turbo ion spray (AB Sciex, Concord, ON, Canada) and a Shimadzu pump and controller (Columbia, MD, USA). Prostaglandins were chromatographically resolved using a Kromasil column 100-3.5C18, 150 × 2.1 mm (EKa Chemicals, Valleyfields, QC, Canada). A linear acetonitrile gradient from 45 to 90% during 12 min at a flow rate of 200 μL/min was used. The mobile phase consisted of water buffered with 0.05% acetic acid and acetonitrile 90% with acetic acid 0.05%. The injection volume was 10 μL per sample, which were kept at 4 °C during analysis. Individual products were detected using negative ionization and the monitoring of the transition m/z 351 ➔ 271 for PGE2 and 355 ➔ 275 for PGE2d4 with a collision voltage of − 25 V. For quantification of specific ions, the area under the curves was measured. Mitotic activity assessed by immunohistochemistry of Ki67 Animals were euthanized 4 h after combined i.t. treatment with 5FU and 5 MBq 18F-FLT or 15 Gy gamma irradiation. Tumor samples were removed and fixed in 10% buffered formalin. The 5-μm sections from paraffin-embedded blocks were stained with conventional hematoxylin-eosin. 
For Ki67 staining, 5-μm sections from paraffin-embedded blocks were deparaffinized in xylene, rehydrated using graded alcohol, and washed with PBS buffer (pH 7.4). For antigen retrieval, sections were placed in 0.01 M sodium citrate buffer (pH 6.0) for 10 min inside a steamer cooker. Sections were cooled to room temperature and washed with PBS buffer. Endogenous peroxidase was blocked with 3% H2O2 for 15 min. Sections were incubated in 10% bovine serum albumin (BSA) for 1 h at room temperature. Thereafter, sections were incubated overnight at 4 °C with the primary mouse monoclonal antibody against Ki67 diluted in 0.5% BSA (PM375 AA Biocare Medical, Concord, California, USA). Sections were then treated with the secondary anti-rabbit antibody (PM375 AA Biocare Medical, Concord, California, USA) diluted in 0.5% BSA for 1 h at room temperature. Diaminobenzidine tetrahydrochloride (0.6 mg/mL in Tris-buffered saline, pH 7.6, containing 0.04% hydrogen peroxide) was used to develop the brown color. Methyl green was used to counterstain the slides. A negative control (with the primary antibody omitted) was included with each batch. Counting of Ki67-positive cells was carried out in ten consecutive fields at 20× magnification. The Ki67 index was estimated as the percentage of Ki67-positive cells among all counted tumor cells.
All statistical analyses were performed using Prism 7.03 for Windows (GraphPad Software). All results are reported as mean ± SD. The number of animals per group ranged from 2 to 4: (control untreated, i.t. 5FU, 15 Gy EBRT, i.t. 5FU + 15 Gy, i.t. 5FU + i.t. 18F-FLT 15 MBq, i.t. 5FU + i.t. 18F-FDG 15 MBq, n = 4), (PET i.t. 18F-FLT, i.t. FLT, i.t. 18F-FDG 15 MBq, untreated distant tumor (i.t. 18F-FLT 10 MBq), n = 3), and (PET i.t. 18F-FDG, i.t. 18F-FLT 15 MBq, i.t. 18F-FLT 25 MBq, n = 2). Statistical analyses were performed as described in the figure and table legends. Ordinary one-way ANOVA with Dunnett's multiple comparisons test was used to compare the residence time of 18F-FLT in the infused tumor to that of non-target tissues, and to compare the 5Td of the non-treated animals to that of the different experimental groups. Ordinary one-way ANOVA with Tukey's multiple comparison test was used to compare the 5Td values of the (i.t. 5FU + 15 Gy EBRT) group to those of the (i.t. 5FU + i.t. 18F-FLT 15 MBq) and (i.t. 5FU + i.t. 18F-FDG 15 MBq) groups. Differences were considered statistically significant at p ≤ 0.05.
Distribution kinetics and clearance of i.t. 18F-FLT and 18F-FDG
PET images of mice having received a 5 MBq i.t. infusion of 18F-FLT or 18F-FDG over 10 min are displayed in Fig. 1. Comparison of the two sets of images clearly shows that while primary tumor uptakes are similar for both tracers, uptakes in healthy organs are significantly reduced by i.t. administration of 18F-FLT. It is also worth noting that 18F-FLT activity grows steadily in the distant contralateral (non-injected) tumors, while that in the i.t. injected tumor decreases (Fig. 1). This is in contrast to 18F-FDG activity, which is eliminated from the primary tumor to multiple other tissues, without significant apparent accumulation in the contralateral tumor.
Kinetics of 18F-FLT (top) and 18F-FDG (bottom) distribution up to 120 min after i.t. infusion of 5 MBq in the primary tumor (pt) of nude mice bearing the colorectal HCT116 tumor as assessed by PET imaging. Tumors were implanted on each thigh and additionally on the right shoulder solely for animals injected with 18F-FLT.
18F-FLT uptake increased steadily in the distant contralateral (ct) and shoulder tumors (st) as it decreased in the i.t. injected tumor, in contrast to 18F-FDG that mostly diffused in the heart (he), brain (br), and kidneys (ki). Lower accumulation in healthy tissues was obtained with i.t. 18F-FLT. lu denotes the lung and li denotes the liver Radiotracer uptake in the i.t. injected tumor reaches a maximum of about 10 min after the beginning of the infusion and drops asymptotically to less than 10% of the maximum after 120 min (Fig. 2). At this time, contralateral proximal and distant tumors showed similar 18F-FLT uptakes of ~ 8%IA/g. The 18F-FDG kinetics follow a different trend in the contralateral tumor, reaching a plateau at 40 min and a tumor retention only about one fifth that of 18F-FLT at 120 min (Fig. 2). As observed in Fig. 1, time-activity curves corroborate the significantly lower and slower uptakes in the kidneys, brain, bone, and heart with i.t. infusion of 18F-FLT compared to 18F-FDG (Fig. 3). Tumor uptake and clearance of 18F-FLT (solid line) and 18F-FDG (dashed line) in mice bearing HCT116 human colorectal tumors after i.t. infusion of 5 MBq into the left rear flank tumor. The radiotracer kinetics was measured during 2 h by dynamic PET imaging. Data are expressed as the percent injected activity per gram for tissue (%IA/g). All data were corrected for physical decay of the isotope Distribution kinetics and clearance of 18F-FLT (solid line) and 18F-FDG (dashed line) in mice bearing HCT116 human colorectal tumors after i.t. infusion of 5 MBq into the rear flank tumor. The radiotracer biodistribution kinetics were measured during 2 h by dynamic PET imaging. Data are expressed as the percent injected dose per gram of tissue (%IA/g). All data were corrected for physical decay of the isotope The residence time of 18F-FLT and 18F-FDG in each tissue is displayed in Fig. 4. For 18F-FLT, the contralateral tumor residence time is significantly higher than those of the muscle, bone, and brain. By contrast, the largest residence time in non-target tissues for 18F-FDG was found in the brain, bone, kidneys, and heart. Residence time of 18F-FLT and 18F-FDG in each tissue. Residence times were calculated as areas under the time-activity curves shown in Figs. 2 and 3 extrapolated to infinity with the physical decay of 18F. The horizontal bars stand for the mean The time-activity curves for i.t. and i.v. administration of 18F-FLT are presented in Fig. 5 for the data not corrected for radioactive decay. Bone uptake is much slower following i.t. administration as compared to i.v., although both time-activity curves converged at 120 min post administration. The extrapolated AUC of these time-activity curves indicate a trend towards lower radiation burden, although there was no statistically significant difference (one sample t test) between the AUC (i.t. 5.12, sdm 1.37, n = 3; i.v. 6.7, n = 1) obtained with either methods of administration. Bone uptake of [18F]-FLT following i.v. (full circles) or i.t. (solid line with open circles, n=3) administration. Data obtained from ROI traced on the forepaw bones and spine. The data are not corrected for 18F decay Tumor response after local 18F-FLT i.t. infusion The therapeutic efficacy in animals treated with 15 and 25 MBq of i.t. 18F-FLT was compared to non-irradiated controls and (as a reference) to radiation delivered by a single 15 Gy dose EBRT (Fig. 6a). 
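As an aside on the residence times shown in Fig. 4, which were obtained as areas under the time-activity curves extrapolated to infinity with the physical decay of 18F: the sketch below shows one simple way such a value can be computed. The time points and activities are invented; the only fixed quantity is the 18F half-life of roughly 110 min, and the tail term assumes that beyond the last measurement the activity falls off by physical decay alone.

    import numpy as np

    F18_HALF_LIFE_MIN = 109.8                      # physical half-life of 18F
    decay_const = np.log(2) / F18_HALF_LIFE_MIN    # per minute

    def auc_to_infinity(t_min, activity):
        # Trapezoidal area over the measured window plus an analytical tail,
        # assuming pure physical decay after the last time point.
        measured = np.trapz(activity, t_min)
        tail = activity[-1] / decay_const
        return measured + tail

    # Invented %IA/g values for a contralateral tumor over a 2 h dynamic scan
    t = np.array([0.0, 10, 30, 60, 90, 120])
    a = np.array([0.0, 1.5, 3.5, 5.5, 7.0, 8.0])
    print(auc_to_infinity(t, a) / 60.0)            # area expressed in %IA.h/g

If the curve being integrated has already been corrected for decay, that correction would have to be removed before applying the physical-decay tail in this way.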
The therapeutic effect of 18F-FLT clearly derives from radiation damage, as no reduction in tumor growth was found when non-radioactive FLT was injected (Fig. 6a). However, a significant improvement in tumor response was observed when using i.t. infusion of 18F-FLT, as evidenced in Table 1 by the increase in TGD and EF. It is noteworthy that similar EFs were found with 15 MBq i.t. 18F-FLT (EF = 1.8) and with 15 Gy EBRT (EF = 1.7). The treatment effect can be enhanced even further with 25 MBq i.t. 18F-FLT (EF = 2.1, Table 1), at a dose still significantly lower than conventional radiotherapy. Furthermore, the unexpected uptake of 18F-FLT in the contralateral tumors with i.t. administration may be responsible for their slower growth rate compared to the control (Fig. 6b; EF = 1.3, Table 1), suggesting a potential role of i.t. 18F-FLT metabolic TRT in controlling distant metastases. The time-integrated activity was extracted specifically from the individual tumor tissues as well as normal organs. This therefore allows us to estimate the dose delivered to each tissue/organ from the 18F radionuclide (Table 2), which can be further compared to the dose of 15 Gy delivered to the tumor by EBRT with the gamma knife. Animals treated with 15 MBq i.t. 18F-FLT received a lower radiation dose, for the same tumor treatment efficiency as with EBRT (Fig. 6a), while maintaining a low radiation exposure of healthy tissues (Fig. 3).
Tumor growth profiles after different treatment conditions in HCT116 colorectal xenografts compared to controls without treatment. The curves are normalized to the tumor volume at the beginning of follow-up. a 15 Gy of external radiation and with different activities of i.t. injected 18F-FLT, b contralateral tumor after i.t. injected 18F-FLT, c combined treatment of i.t. 5FU and i.t. 18F-FLT or 15 Gy EBRT, and d combined treatment of i.t. 5FU and i.t. 18F-FDG or 15 Gy EBRT. The error bars represent the standard deviation of the mean for two to four animals
Table 1 Therapeutic efficacy of 18F-metabolic TRT or EBRT, alone or combined with i.t. 5FU, in the HCT116 human colorectal cancer xenografts
Table 2 Mean absorbed dose of 18F-FDG in different tissues/organs after i.t. 18F-FDG estimated by using the Fricke chemical primary standard dosimeter
A single i.t. infusion of 5FU (2.5 mg/kg), 4 h prior to 18F-radiotherapy, results in a 2.1-fold better tumor growth inhibition than local radiotherapy alone (Table 1 and Fig. 6c, d). It is worth noting that the treatment responses upon combination of 5FU with 15 MBq i.t. 18F-FLT (EF = 3.7, Table 1) or 18F-FDG (EF = 3.4) are quite similar, as might be predicted from their tumor uptakes measured by PET. Concomitant administration of 5FU with 15 MBq i.t. 18F-FLT is noticeably more efficient (p ≤ 0.01) than when 5FU is combined with 15 Gy EBRT (EF = 3.0, p ≤ 0.05, Table 1).
Inflammation response after i.t. 18F-FLT infusion
PGE2 biosynthesis in muscle tissues near the irradiated tumor areas was assessed 4 h post-treatment by LC/MS/MS (Fig. 7). After a single i.t. injection of 18F-FLT, the level of PGE2 was similar to that of the non-irradiated control group and about 2.5-fold lower than that measured in muscle tissues adjacent to tumors that received a single 15 Gy irradiation by EBRT.
LC/MS/MS quantification of PGE2 in muscle tissues near irradiated tumors. Tumors received either 15 Gy gamma irradiation or i.t. injection of 5 MBq 18F-FLT. The animals were euthanized 4 h post-treatment.
ns: p > 0.05; *p ≤ 0.05 Tumor cell proliferation: Ki67 immunohistochemistry analysis To further characterize the tumor response after combined i.t. 5FU with either 15 Gy EBRT or i.t. 5 MBq 18F-FLT in HCT116 xenograft tumors, the number of proliferating tumor cells was assessed by immunohistochemistry staining for Ki67 (Fig. 8). Tumors treated with 15 Gy EBRT alone (23.3 ± 6.2) or 18F-FLT alone (20.2 ± 3.4) showed a significant decrease in Ki67-positive cells compared to the control group (40.9 ± 7.9, p = 0.001). The proliferation index was further decreased when combining 15 Gy EBRT or i.t. 18F-FLT with i.t. 5FU radiosensitizers (12.8 ± 2.6 and 7.4 ± 1.7, respectively, p = 0.001). Immunohistochemical staining showing Ki67-based proliferation measured after treatment with i.t. 18F-FLT or 15 Gy EBRT, which were also combined with i.t. 5FU. ns: p > 0.05; *p ≤ 0.05; **p ≤ 0.01; ****p ≤ 0.0001 The ratios in Table 3 provide a comparison of the radiospecificity of 18F-FLT and 18F-FDG relative to the kidneys, liver, brain, and heart. The distribution of 18F-FLT after i.t. injection could be followed by PET imaging in the primary tumor and the contralateral tumors, which simulated distant metastasis sites. We demonstrated that while uptakes were similar in the injected tumor following the i.t. 18F-FLT and 18F-FDG infusion, only 18F-FLT provided increased accumulation and residence time in distant tumors, coupled with low and slow uptakes in non-targeted organs. 18F-FLT demonstrated a higher specificity for the detection of primary and contralateral tumors than 18F-FDG following i.t. administration. Indeed, tumor-to-normal tissue ratios for the primary tumor were generally more favorable after i.t. infusion of 18F-FLT than with 18F-FDG (Table 3). More importantly, all tumor-to-tissue ratios for the contralateral tumors increased over time and were always superior to those of i.t. 18F-FDG, confirming a higher tumor specificity for 18F-FLT. One should note that the contralateral tumor 18F-FLT residence time, which is predictive of the therapy tumor-absorbed dose, significantly exceeded that of the muscle (p = 0.0001), brain (p ≤ 0.0001), and bone (p = 0.0034); it was also higher than that of the liver and heart, although the difference is not statistically significant for these tissues (Fig. 4). By contrast, the largest residence time for 18F-FDG was found in the brain, bone, kidneys, and heart, which are all particularly sensitive to ionizing radiation. Table 3 Tumor-to-tissue ratios of 18F-FLT and 18F-FDG at three different times post injection after i.t. infusion of 5 MBq into a single tumor As expected, 18F-FLT bone uptake was lower at early times following i.t. infusion and similar to that of i.v. administration 2 h after (Fig. 5). However, the difference in radiation exposure as estimated from the area under the extrapolated curves is not statistically significant. Considering the low statistics involved, further studies are needed to evaluate the risk of radiation exposure of the bone marrow following metabolic TRT with 18F-FLT. Currently, there is a growing interest in the use of nuclear medicine compounds not only for qualitative and quantitative evaluation of oncological pathologies, but also as treating agents [32]. The present investigations highlight the benefits of PET imaging to predict and measure the 18F-FLT treatment efficacy. The treatment with i.t. administration of 18F-FLT shows several key advantages compared to EBRT. 
The local radiation exposure associated with 18F-FLT arises from the slowing down of the emitted 18F positrons, which have a mean energy of 250 keV, resulting in an average range in tissue of ~ 1 to 2 mm [11]. This latter property makes positron emission particularly suitable for local irradiation of the tumor tissue, while preventing damage to healthy tissues. The positrons lose their kinetic energy in tissue in the same manner as electrons [17], producing along their paths a considerable density of highly reactive ions and secondary low-energy electrons [33]. In addition, "cross-fire" and "bystander" effects could potentiate tumor damage by such short-range positrons [34]. These unique properties of TRT may contribute to overcoming the radio-resistance of solid tumors containing oxygen-deficient hypoxic regions, as a higher radiation dose can potentially be delivered to tumor cells while still preserving peritumoral healthy tissues. The risk of radiation-induced late normal tissue injury, caused by chronic oxidative stress or inflammation, limits the dose of radiation that can be delivered safely to cancer patients [35]. In this study, we observed a low accumulation of i.t. administered 18F-FLT in healthy tissues and a low level of PGE2 and inflammation in healthy tissues surrounding the tumor. These data suggest that TRT with this radioligand has fewer off-target effects, making it more attractive for cancer radiotherapy. Moreover, i.t. infusion of 18F-FLT may allow higher doses of radiation to be delivered locally to target tissues than with EBRT, where the typical dose is in the range of 60 to 80 Gy [36]. The reported highly selective uptake of 18F-FLT in the bone marrow represents a challenge for TRT using this radiopharmaceutical [6]. However, our results show that with an i.t. protocol, the radiation exposure is of the same order as for other sensitive tissues such as the brain, heart, and liver. It is also noteworthy that the radiation burden to bone and bone marrow is significantly lower than for i.t. administration of 18F-FDG. Previous studies have demonstrated the therapeutic potential of positrons in colon carcinoma, breast carcinoma, and lung metastases upon i.v. or intraperitoneal (i.p.) injection of 18F-FDG [17, 37,38,39]. Fang et al. reported an improvement of tumor response in a mouse model of colon cancer, compared to a control group, after treatment with 55.5 to 222 MBq of 18F-FDG via i.p. administration; however, no significant difference between the treated groups was observed [38]. In the present study, the range of 18F-FLT activities used for i.t. injection was an order of magnitude lower than in these earlier studies. Nevertheless, a significant tumor growth delay relative to the control group was observed after i.t. injection of 15 MBq 18F-FLT. Similar TGDs and EFs were observed for tumors treated with 15 MBq of 18F-FLT and those receiving 15 Gy EBRT. Despite the similar response to treatment, it is somewhat difficult to correlate these two results directly. Our experiment with the Fricke dosimeter demonstrated that the radiation dose delivered to the tumor from the i.t. injection of 15 MBq of 18F-FLT (assuming that all the radiation energy of 18F-FLT remains in the tumor) is lower than that delivered by EBRT for the same biological effect. Therefore, our results suggest that a therapeutic benefit from direct exposure to positron-emitting agents can be achieved with local 18F-radiotherapy. TRT can be enhanced even further by the concurrent administration of the radiosensitizing agent 5FU [39].
Tumor cell proliferation assessed by Ki67 labeling index was evaluated based on previous studies [40] at 4 h post-combined treatment of i.t. 5FU with either i.t. injection of 18F-FLT or 15 Gy EBRT (Fig. 8). The concurrent i.t. administration of the chemotherapeutic agent 5FU with 18F-FLT reduced considerably tumor cell proliferation induced by 18F-FLT even at a suboptimal 5 MBq dose, as compared to the combined treatment of i.t. 5FU with 15 Gy radiation. Moreover, this combined treatment did not induce any body weight loss in mice at any doses (Table 4), indicating that it was well tolerated. Thus, adding 5FU to 18F-FLT could considerably enhance TGD and EF, without significantly increasing damage to healthy tissues, including the bone marrow. Table 4 Change in body weight of nude mice after different treatment conditions Since local tumor recurrence and metastases are a major concern in cancer treatment, the fast transfer of 18F-FLT from the treated tumor to non-injected tumors may be a significant benefit after i.t. infusion (Fig. 1) via convection-enhanced delivery (CED) [20]. This is indicative of fast distribution and circulation rates for 18F-FLT through tumor vascularization and blood circulation. More generally, CED of radiopharmaceuticals in combination with radiosensitizers, such as 18F-FLT and 5FU, may provide a new effective treatment option for localized tumors and their metastases. Moreover, considering the advances in TRT [41, 42], other radionuclides bound to molecules capable of reaching preferentially cancer cells could be injected directly into the primary tumor by CED, which is a clinical practice being increasingly applied to chemotherapy [20]. Recent developments in targeted PET imaging based on metabolism, angiogenesis, receptor-mediated antibodies, etc., may offer promising theranostic options, as already pointed out by others [43,44,45]. As in the present work, combining CED to the increased cancer cell specificity of these tracers relative to 18F-FDG could widen the range of PET radiopharmaceuticals potentially useful for therapy of not only well-localized tumors, but also their metastases. Collectively, the results obtained in this study indicate that i.t. administration of 18F-FLT may have definitive advantages for metabolic targeted radiotherapy (TRT). Compared to i.t. administration of 18F-FDG, tumor response to therapy can be slightly enhanced without detrimental consequences for most sensitive vital organs such as the brain, heart, and kidneys. While the radiation exposure of the bone marrow from 18F-FLT leaking from the tumor is a concern, it remains below the exposure from 18F-FDG. In either case, the i.t. administration concept for metabolic TRT of primary tumors has been established. Compared to external beam radiotherapy, metabolic TRT by i.t. infusion of 18F-FLT provides a sixfold gain in radiotherapeutic efficacy with less secondary effects to surrounding healthy tissues. Tumor response is further enhanced by the synergetic combination of 18F-FLT and the chemotherapeutic agent 5FU. Moreover, since i.t. 18F-FLT administration to a primary tumor provides significant uptake and residence time in distant tumors, it has the potential of either controlling or slowing the growth of metastases. Finally, PET imaging of positron-emitting radiolabeled compounds used for metabolic TRT allows direct visualization of the radiation source distribution and estimation of the local dose distribution during the entire course of treatment. 
18F-FDG: 2-Deoxy-2-[18F]-fluoro-D-glucose 18F-FLT: 3′-Deoxy-3′-[18F]-fluorothymidine 5FU: 5-Fluorouracil 5Td: Five times tumor growth delay CED: Convection-enhanced delivery EBRT: EF: Enhancement factor i.p. : Intraperitoneal i.t. : Intratumoral i.v. : Liquid chromatography/tandem mass spectrometry ROI: TAC: Time-activity curve TGD: Tumor growth delay TK1: Thymidine kinase 1 TRT: Targeted radiotherapy Herschman HR, MacLaren DC, Iyer M, Namavari M, Bobinski K, Green LA, et al. Seeing is believing: non-invasive, quantitative and repetitive imaging of reporter gene expression in living animals, using positron emission tomography. J Neurosci Res. 2000;59(6):699–705. Shields AF. PET imaging with 18F-FLT and thymidine analogs: promise and pitfalls. J Nucl Med. 2003;44(9):1432–4. Been LB, Suurmeijer AJ, Cobben DC, Jager PL, Hoekstra HJ, Elsinga PH. [18F]FLT-PET in oncology: current status and opportunities. Eur J Nucl Med Mol Imaging. 2004;31(12):1659–72. Peck M, Pollack HA, Friesen A, Muzi M, Shoner SC, Shankland EG, et al. Applications of PET imaging with the proliferation marker [18F]-FLT. Q J Nucl Med Mol Imaging. 2015;59(1):95–104. Leung K. Molecular Imaging and Contrast Agent Database (MICAD) 3'-Deoxy-3'-[18F]fluorothymidine, https://www.ncbi.nlm.nih.gov/books/NBK23373/. Shields AF, Grierson JR, Dohmen BM, Machulla HJ, Stayanoff JC, Lawhorn-Crews JM, et al. Imaging proliferation in vivo with [F-18]FLT and positron emission tomography. Nat Med. 1998;4(11):1334–6. McKinley ET, Watchmaker JM, Chakravarthy AB, Meyerhardt JA, Engelman JA, Walker RC, et al. [18F]-FLT PET to predict early response to neoadjuvant therapy in KRAS wild-type rectal cancer: a pilot study. Ann Nucl Med. 2015;29(6):535–42. McKinley ET, Ayers GD, Smith RA, Saleh SA, Zhao P, Washington MK, et al. Limits of [18F]-FLT PET as a biomarker of proliferation in oncology. PLoS One. 2013;8(3):e58938. Xu Y, Shi QL, Ma H, Zhou H, Lu Z, Yu B, et al. High thymidine kinase 1 (TK1) expression is a predictor of poor survival in patients with pT1 of lung adenocarcinoma. Tumour Biol. 2012;33(2):475–83. Hoyng LL, Frings V, Hoekstra OS, Kenny LM, Aboagye EO, Boellaard R. Metabolically active tumour volume segmentation from dynamic [18F]FLT PET studies in non-small cell lung cancer. EJNMMI Res. 2015;5:26. Woolf DK, Beresford M, Li SP, Dowsett M, Sanghera B, Wong WL, et al. Evaluation of FLT-PET-CT as an imaging biomarker of proliferation in primary breast cancer. Br J Cancer. 2014;110(12):2847–54. Backes H, Ullrich R, Neumaier B, Kracht L, Wienhard K, Jacobs AH. Noninvasive quantification of [18F]-FLT human brain PET for the assessment of tumour proliferation in patients with high-grade glioma. Eur J Nucl Med Mol Imaging. 2009;36(12):1960–7. Sanghera B, Wong WL, Sonoda LI, Beynon G, Makris A, Woolf D, et al. FLT PET-CT in evaluation of treatment response. Indian J Nucl Med. 2014;29(2):65–73. van Waarde A, Cobben DC, Suurmeijer AJ, Maas B, Vaalburg W, de Vries EF, et al. Selectivity of 18F-FLT and 18F-FDG form differentiating tumor from inflammation in a rodent model. J Nucl Med. 2004;45(4):695–700. Velikyan I. Molecular imaging and radiotherapy: theranostics for personalized patient management. Theranostics. 2012;2(5):424–6. Johns HE, Cunningham JR. The physics of radiology (4th edition) Charles C Thomas, Springfield 1983. Moadel RM, Nguyen AV, Lin EY, Lu P, Mani J, Blaufox MD, et al. Positron emission tomography agent 2-deoxy-2-[18F]fluoro-D-glucose has a therapeutic potential in breast cancer. Breast Cancer Res. 2003;5(6):R199–205. 
Moadel RM, Weldon RH, Katz EB, Lu P, Mani J, Stahl M, et al. Positherapy: targeted nuclear therapy of breast cancer with 18F-2-deoxy-2-fluoro-D-glucose. Cancer Res. 2005;65(3):698–702. Morton CL, Houghton PJ. Establishment of human tumor xenografts in immunodeficient mice. Nat Protoc. 2007;2(2):247–50. Jahangiri A, Chin AT, Flanigan PM, Chen R, Bankiewicz K, Aghi MK. Convection-enhanced delivery in glioblastoma: a review of preclinical and clinical studies. J Neurosurg. 2017;126(1):191–200. Charest G, Sanche L, Fortin D, Mathieu D, Paquette B. Glioblastoma treatment: bypassing the toxicity of platinum compounds by using liposomal formulation and increasing treatment efficiency with concomitant radiotherapy. Int J Radiat Oncol Biol Phys. 2012;84(1):244–9. Yun M, Oh SJ, Ha HJ, Ryu JS, Moon DH. High radiochemical yield synthesis of 3′-deoxy-3′-[18F] fluorothymidine using (5′-O-dimethoxytrityl-2′-deoxy-3′-O-nosyl-β-D-threopentofuranosyl) thymine and its 3-N-BOC-protected analogue as a labeling precursor. Nucl Med Biol. 2003;30(2):151–7. Loening AM, Gambhir SS. AMIDE: a free software tool for multimodality image analysis. Mol Imaging. 2003;2(3):131–7. Kaushik A, Jaimini A, Tripathi M, D'Souza M, Sharma R, Mondal A, et al. Estimation of radiation dose to patients from 18FDG whole body PET/CT investigations using dynamic PET scan protocol. Indian J Med Res. 2015;142(6):721–31. Charest G, Mathieu D, Lepage M, Fortin D, Paquette B, Sanche L. Polymer gel in rat skull to assess the accuracy of a new rat stereotactic device for use with the gamma knife. Acta Neurochir. 2009;151(6):677–83. Fricke H, Morse S. The chemical action of roentgen rays on dilute ferrosulphate solutions as a measure of dose. Am J Roent Radium Ther Nucl Med. 1927;18:430–2. Matthews RW. Aqueous chemical dosimetry. Int J Appl Radiat Isot. 1982;33(11):1159–70. Tippayamontri T, Betancourt-Santander E, Guérin B, Lecomte R, Paquette B, Sanche L. Utilization of the ferrous (Fricke) dosimeter for evaluating the radiation absorbed dose of [18F]-FDG PET radiotracer. (To be published). Bergeron M, Cadorette J, Tétrault MA, Beaudoin JF, Leroux JD, Fontaine R, et al. Imaging performance of LabPET APD-based digital PET scanners for pre-clinical research. Phys Med Biol. 2014;59:661–78. Nakanishi M, Rosenberg DW. Multifaceted roles of PGE2 in inflammation and cancer. Semin Immunopathol. 2013;35(2):123–37. Desmarais G, Fortin D, Bujold R, Wagner R, Mathieu D, Paquette B. Infiltration of glioma cells in brain parenchyma stimulated by radiation in the F98/Fischer rat model. Int J Radiat Biol. 2012;88(8):565–74. Gallivanone F, Valente M, Savi A, Canevari C, Castiglioni I. Targeted radionuclide therapy: frontiers in theranostics. Front Biosci (Landmark Ed). 2017;22:1750–9. Alizadeh E, Sanche L. Precursors of solvated electrons in radiation biology. Chem Rev. 2012;112(11):5578–602. Brady D, O'Sullivan JM, Prise KM. What is the role of the bystander response in radionuclide therapies? Front Oncol. 2013;3:215. Zhao W, Robbins ME. Inflammation and chronic oxidative stress in radiation-induced late normal tissue injury: therapeutic implications. Curr Med Chem. 2009;16(2):130–43. Knottenbelt DC, Snalune K, Kane JP. "Clinical Equine Oncology", Published by Elsevier Health Sciences. 2015. Caridad V, Arsenak M, Abad MJ, Martín R, Guillén N, Colmenter LF, et al. Effective radiotherapy of primary tumors and metastasis with 18F-2-deoxy-2-fluoro-D-glucose in C57BL/6 mice. Cancer Biother Radiopharm. 2008;23(3):371–5. 
Fang S, Wang J, Jiang H, Zhang Y, Xi W, Zhao C, et al. Experimental study on the therapeutic effect of positron emission tomography agent [18F]-labeled 2-deoxy-2-fluoro-d-glucose in a colon cancer mouse model. Cancer Biother Radiopharm. 2010;25(6):733–40. Rosenthal DI, Catalano PJ, Haller DG, Landry JC, Sigurdson ER, Spitz FR, et al. Phase I study of preoperative radiation therapy with concurrent infusional 5-fluorouracil and oxaliplatin followed by surgery and postoperative 5-fluorouracil plus leucovorin for T3/T4 rectal adenocarcinoma: ECOG E1297. Int J Radiat Oncol Biol Phys. 2008;72(1):108–13. Donizy P, Kaczorowski M, Leskiewicz M, Zietek M, Pieniazek M, Kozyra C, et al. Mitotic rate is a more reliable unfavorable prognosticator than ulceration for early cutaneous melanoma: a 5-year survival analysis. Oncol Rep. 2014;32(6):2735–43. Reilly RM. Monoclonal antibody and peptide-targeted radiotherapy of cancer. Hoboken, New Jersey: Published by John Wiley & Sons, Inc; 2010. Begg AC, Stewart FA, Vens C. Strategies to improve radiotherapy with targeted drugs. Nat Rev Cancer. 2011;11(4):239–53. Jaini S, Dadachova E. FDG for therapy of metabolically active tumors. Semin Nucl Med. 2012;42(3):185–9. Vallabhajosula S, Solnes L, Vallabhajosula B. A broad overview of positron emission tomography radiopharmaceuticals and clinical applications: what is new? Semin Nucl Med. 2011;41(4):246–64. Rice SL, Roney CA, Daumar P, Lewis JS. The next generation of positron emission tomography radiopharmaceuticals in oncology. Semin Nucl Med. 2011;41(4):265–82. We would like to thank the staff of the CIMS for excellent technical assistance. We thank Dr. Andrew Bass for fruitful discussions and valuable comments on this manuscript. This work was supported by the Canadian Institutes of Health Research (Grant # MOP-81356) and the Centre de Recherche du Centre Hospitalier Universitaire de Sherbrooke funded by the Fonds de recherche du Québec - Santé (FRQS). BG is holder of the Jeanne and J.-Louis Lévesque Chair in Radiobiology at Université de Sherbrooke. Please contact the author for data requests. Department of Nuclear Medicine and Radiobiology, Faculty of Medicine and Health Sciences, Université de Sherbrooke, Sherbrooke, QC, Canada Thititip Tippayamontri, Brigitte Guérin, Roger Lecomte, Benoit Paquette & Léon Sanche Center of Radiotherapy Research, Faculty of Medicine and Health Sciences, Universite de Sherbrooke, Sherbrooke, QC, Canada Thititip Tippayamontri, Benoit Paquette & Léon Sanche Sherbrooke Molecular Imaging Center, CRCHUS, Sherbrooke, QC, Canada Brigitte Guérin, René Ouellet, Otman Sarrhini, Jacques Rousseau & Roger Lecomte Department of Radiological Technology and Medical Physics, Faculty of Allied Health Sciences, Chulalongkorn University, Bangkok, Thailand Thititip Tippayamontri Brigitte Guérin René Ouellet Otman Sarrhini Jacques Rousseau Roger Lecomte Benoit Paquette Léon Sanche TT participated in the design of the study and data acquisition, performed the statistical analysis, and drafted the manuscript. BG, BP, LS, and RL participated in the design of the study, interpretation of the data, and revision of the manuscript. RO participated in the design of the study and data acquisition. OS and JR participated in the data acquisition and data analysis and revised the manuscript. All authors read and approved the final manuscript. Correspondence to Thititip Tippayamontri. The animal protocol approved by the Université de Sherbrooke Animal Care and Use Committee (protocol number: 235-14B) Figure S1. 
The relationship of absorbed dose measured by Fricke dosimeter as a function of exposure activity of 18F-FDG. (TIF 11 kb) Figure S2. The relationship of cumulated activity detected by PET imaging per administered activity of 18F-FDG. (TIF 25 kb) Supplementary information. (DOCX 13 kb) Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. Tippayamontri, T., Guérin, B., Ouellet, R. et al. Intratumoral 18F-FLT infusion in metabolic targeted radiotherapy. EJNMMI Res 9, 33 (2019). https://doi.org/10.1186/s13550-019-0496-7 18F-fluorothymidine (18F-FLT) 5-Fluorouracil (5FU) Intratumoral (i.t.) infusion Targeted radiotherapy (TRT) 2-deoxy-2-[18F]-fluoro-D18 glucose (18F-FDG)
CommonCrawl
Akhtar, Kiran and Rahul were riding in a motorcar that was moving with a high velocity on an expressway when an insect hit the windshield and got stuck on the windscreen. Akhtar and Kiran started pondering over the situation. Kiran suggested that the insect suffered a greater change in momentum as compared to the change in momentum of the motorcar (because the change in the velocity of the insect was much more than that of the motorcar). Akhtar said that since the motorcar was moving with a larger velocity, it exerted a larger force on the insect. And as a result the insect died. Rahul while putting an entirely new explanation said that both the motorcar and the insect experienced the same force and a change in their momentum. Comment on these suggestions. I agree with Rahul. This is because due to the law of conservation of momentum, during a collision, momentum of the system is conserved. Both the bodies suffer the same momentum change. However, the insect having smaller mass will suffer greater change in velocity. The motor car having much larger mass does not suffer any noticeable change in velocity. Anubhav Agarwal Asked in Physics Jun 18, 2022 A particle is moving with constant speed in a circular path. Find the ratio of average velocity to its instantaneous velocity when the particle describes an angle $\theta=\frac{\pi}{2}$ by Anubhav Agarwal AlokKumar Asked in Physics Mar 25, 2022 A particle of charge \(-16 \times 10^{-18}\) coulomb moving with velocity \(10 \mathrm{~m} \mathrm{~s}^{-1}\) along the \(X\)-axis enters a region AlokKumar Asked in Physics Nov 1, 2021 Two straight lines drawn on the same x-t curve make angles 30° and 60° with time axis. Which line represents greater velocity? What is the ratio of the two velocities? velocity-time A 60 kg body is pushed horizontally with just enough force to start it moving across a floor and the same force continues to act afterwards. A 1000 kg vehicle moving with a speed of 10 m/s is brought to rest in a distance of A block B is pushed momentarily along a horizontal surface with an initial velocity $v$. If $\mu$ is the coefficient of sliding friction between $B$ and the surface, block B will come to rest after a time : A person can run fast on a hard ground, but cannot swim that fast in water. A bus was moving with a speed of 54 km/h. On applying brakes it stopped in 8 seconds. Calculate the acceleration. It is observed that the value of acceleration due to gravity ' $g$ ' at a place is $980 \mathrm{~cm} / \mathrm{sec}^{2}$; obtain its value in a system in which metre is the unit of length and minute is the unit of time. A car weighing 2000 kg and moving with a speed of 20 m/s is stopped in 10 s on applying brakes.
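As a numerical illustration of the answer given above to the insect-and-motorcar question (the figures are invented purely for illustration): by Newton's third law the impulse on each body has the same magnitude, so for a 1 g insect and a 1000 kg car

$$ |\Delta p_{\text{insect}}| = |\Delta p_{\text{car}}| = 0.03\ \mathrm{kg\,m\,s^{-1}}, \qquad |\Delta v_{\text{insect}}| = \frac{0.03}{0.001} = 30\ \mathrm{m\,s^{-1}}, \qquad |\Delta v_{\text{car}}| = \frac{0.03}{1000} = 3\times10^{-5}\ \mathrm{m\,s^{-1}}. $$

The momentum changes are equal and opposite, but the insect's change in velocity is larger by the ratio of the masses, which is why the collision is catastrophic for the insect and imperceptible for the car.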
CommonCrawl
Antioxidant and tyrosinase inhibition activity of the fertile fronds and rhizomes of three different Drynaria species Joash Ban Lee Tan1Email author and Yau Yan Lim1 © Tan and Lim. 2015 Received: 12 February 2014 Accepted: 7 September 2015 For generations, the rhizomes of Drynaria ferns have been used as traditional medicine in Asia. Despite this, the bioactivities of Drynaria rhizomes and leaves have rarely been studied scientifically. This study evaluates the antioxidant properties of the methanolic extracts of the fertile fronds and rhizomes from three species in this genus: Drynaria quercifolia, Drynaria rigidula and Drynaria sparsisora. The phenolic and flavonoid contents of the samples were respectively quantified with the total phenolic content (TPC) and total flavonoid content (TFC) assays, while the antioxidant activities were determined via measuring the DPPH radical scavenging activity (FRS), ferric reducing power (FRP), ferrous ion chelating (FIC) activity and lipid peroxidation inhibition (LPI). The tyrosinase inhibition activity of all three species was also reported. The fertile fronds of D. quercifolia were found to exhibit the highest overall TPC (2939 ± 469 mg GAE/100 g) and antioxidant activity amongst all the samples, and the fertile fronds of D. quercifolia and D. rigidula exhibited superior TPC and FRP compared to their rhizomes, despite only the latter being widely used in traditional medicine. The fronds of D. quercifolia had high tyrosinase inhibition activity (56.6 ± 5.0 %), but most of the Drynaria extracts showed unexpected tyrosinase enhancement instead, particularly for D. sparsisora's fronds. The high bioactivity of the fertile fronds in the fern species indicate that there is value in further research on the fronds of ferns which are commonly used mostly, or only, for their rhizomes. Fern rhizome Fertile fronds Fern leaves Free radical scavenging Tyrosinase inhibition Interest in phenolic compounds has been on the rise due to their potential human health benefits [1]. They are the most abundant class of plant antioxidants—compounds capable of deactivating or stabilizing free radicals, thereby reducing free-radical-mediated cellular and tissue damage [2]. Plant antioxidants have been purported to have anti-aging properties, and may prevent numerous diseases such as cancer, diabetes, neurodegenerative diseases [3], atherosclerosis, and cardiovascular diseases [4]. Some phenolic compounds are also able to inhibit tyrosinase, an enzyme responsible in melanogenesis and enzymatic browning. This has considerable commercial value in the cosmetics industry as a skin-whitening agent, or an anti-browning agent in the food industry [5]. Tyrosinase inhibition has also been discovered to reduce the viability of catecholaminergic neuronal cells [6] which have been linked to several psychiatric and neurodegenerative disorders, thus providing a possible new treatment in the future [7]. The rhizomes of the Drynaria genus of ferns have a long history of being used as traditional medicine, particularly in India [8], China [9] and Southeast Asia [10]. Drynaria quercifolia (L.) J. Smith is one of the best-known members of this genus, commonly used in Ayurvedic medicine ("Ashwakatri") [8] where its boiled rhizome decoction is consumed orally for its anti-pyretic properties; and used as a treatment for tuberculosis [10], diarrhea, cholera, fever, typhoid, syphilis and skin diseases [11]. 
Additionally, the extract of this fern was capable of inhibiting wildtype and multidrug-resistant bacteria such as Neisseria gonorrhoeae and Streptococcus-β-haemolyticus [10]. Due to the ethnobotanical value of the D. quercifolia's rhizomes, it has been the emphasis of phytochemical-centric research on the species. In comparison, little research has been conducted on the fertile fronds (leaves), which are sometimes used in conjunction with the rhizome to treat tuberculosis and throat infections [12], or as a poultice to reduce swelling [8]. The rhizomes of Drynaria rigidula (Sw.) Beddome and Drynaria sparsisora (Desv.) T. Moore have also been claimed to treat similar diseases as the rhizomes of D. quercifolia, such as diarrhea and gonorrhea [13]. However, unlike D. quercifolia, there have been no reports on the bioactivity of the fertile fronds of both species. D. rigidula is an endangered species in many locations [14], while D. sparsisora is nearly identical to D. quercifolia in physical appearance, but with shorter fertile fronds [15] and a dark-colored scaly rhizome [16]. This study represents the first time that the antioxidant activities of D. rigidula and D. sparsisora fertile fronds and rhizomes have ever been reported, and the comparison between the antioxidant activity of fertile fronds and rhizomes in a fern remains a rarely-explored avenue in most literature. This is also the first time that the tyrosinase inhibition activity for any of these species has ever been reported. Collection of fern samples The fertile fronds and rhizomes of D. quercifolia and D. rigidula were obtained from the Putrajaya Botanical Garden, Kuala Lumpur, while the leaves and rhizomes of D. sparsisora were obtained from Sunway, Petaling Jaya. The ferns were identified by plant taxonomist Anthonysamy S., formerly from University Putra Malaysia. Chemicals and reagents The various reagents used throughout this project were purchased from suppliers as follows. TPC analysis: Folin–Ciocalteu's phenol reagent (2 N, R and M Chemicals, Essex, UK), gallic acid (98 %, Fluka, Steinheim, France), anhydrous sodium carbonate (99 %, J. Kollin, UK); total flavonoid content (TFC) analysis: aluminium chloride (99.5 %, Bendosen Laboratory Chemicals, Bendosen, Norway), potassium acetate (99 %, R and M chemicals), quercetin (98 %, Sigma St. Louis, MO, USA); diphenyl-2-picrylhydrazyl (DPPH·) assay: 1,1-diphenyl-2-picrylhydrazyl (90 %, Sigma, St. Louis, MO, USA); ferric reducing power (FRP) assay: ferric chloride hexa-hydrate (100 %, Fisher Scientific, Loughborough, UK), potassium ferricyanide (99 %, Unilab, Auburn, Australia), trichloroacetic acid (99.8 %, HmbG Chemicals, Barcelona, Spain), potassium dihydrogen orthophosphate (99.5 %, Fisher Scientific, Loughborough, UK), dipotassium hydrogen phosphate (99 %, Merck, Darmstadt, Germany), iron chloride (99 %, R&M Chemicals, Petaling Jaya, Malaysia); ferrous ion chelating (FIC) assay: ferrozine (98 %, Acros Organics, Morris Plains, NJ, USA), ferrous sulphate hepta-hydrate (HmbG Chemicals, Barcelona, Spain), ethylenediaminetetraacetic acid (EDTA) (98 %, Sigma, St. Louis, MO, USA); lipid peroxidation inhibition (LPI): β-carotene (Sigma, St. Louis, MO, USA), chloroform (Fisher Scientific, 99.9 %, Loughborough, UK), linoleic acid, C18H32O2 (Fluka, Steinheim, France), Tween 40 (Fluka, Steinheim, France); tyrosinase inhibition activity: 3, 4-dihydroxy-l-phenylalanine C9H11NO4 (Sigma, St. 
Louis, MO, USA), dimethyl sulfoxide analytical grade (Fisher Scientific, Loughborough, UK), kojic acid C6H6O4 (Sigma, St. Louis, MO, USA), tyrosinase (catechol oxidasemonophenol, dihydroxyphenylalanine) 3400 units/mg solid (Sigma, St. Louis, MO, USA). Extraction of samples Fresh fertile fronds were extracted at a ratio of 50 mL 70 % methanol to 1 g of leaf material after liquid nitrogen-aided crushing. For the rhizomes, 1.0 g of de-skinned rhizome was extracted at a ratio of 50 mL 50 % methanol to 1 g of rhizome. The methanol concentrations chosen for extraction were based on preliminary extraction efficiency screening, where 70 and 50 % methanol were found to be more efficient at extracting the fertile fronds and rhizomes respectively. The extracts were then filtered with a Buchner funnel. The methanolic extracts were stored at −20 °C when not in use. Determination of antioxidant activity Determination of total phenolic content (TPC) The determination of the total phenolic content of the samples was done using a procedure modified from Kähkönen et al. [17] utilizing the Folin–Ciocalteu reagent. Samples (300 μL, in triplicate) were mixed with 1.5 mL of the 10 % Folin–Ciocalteu reagent, followed by an addition of 1.2 mL of 7.5 % (w/v) sodium carbonate (Na2CO3) solution. The test tubes were then left to stand for 30 min in the dark at room temperature before the absorbance values were measured at 765 nm. The total phenolic content was expressed as mg gallic acid equivalent per 100 g of sample (mg GAE/100 g). Total flavonoid content (TFC) Flavonoid content in the extract was determined with the aluminium chloride colorimetric method as described in Chang et al. [18]. Equal volumes of 10 % aluminium chloride and 1.0 M potassium acetate (0.1 mL each) were added to 0.5 mL of extract, followed by 2.8 mL of distilled water. The solutions were mixed well and incubated at room temperature for 30 min before the absorbance was taken at 415 nm. The flavonoid concentration was expressed as mg quercetin equivalent per 100 g sample, mg QE/100 g. DPPH radical scavenging assay (FRS) The DPPH· assay was based on the procedures described in Leong and Shui [19] and Miliauskas et al. [20] where the reduction of the DPPH (2,2-diphenyl-1-picrylhydrazyl) radical was measured spectrometrically to determine the radical scavenging activity of the extract. Two mL of DPPH· solution (5.9 mg in 100 mL methanol) was added to 1 mL of three different concentrations of the of sample extract (diluted with methanol). The absorbance of the solution was measured at 517 nm after a 30 min incubation time. The free radical scavenging activity (FRS) was expressed as ascorbic acid (AA) equivalent antioxidant capacity, in mg AA/100 g using the equation: \({\text{AEAC}} = {\text{IC}}_{{ 50({\text{AA}})}} /{\text{IC}}_{{ 50({\text{sample}})}} \; \times \; 10^{ 5}\). IC50 of AA used for calculation of FRS was 0.00387 mg/mL. Ferric reducing power (FRP) assay The reducing power of the extracts was determined using potassium hexacyanoferrate(III) as described in the procedure described by Tan and Chan [21]. The FRP assay was used to assess the ability of any antioxidants present in the extracts to reduce ferric ions (Fe3+) to ferrous ions (Fe2+). One mL of sample extract of different concentration (diluted with methanol) was added with 2.5 mL of 0.2 M phosphate buffer (pH 6.7) and the same volume of 1 % (w/v) potassium ferricyanide. The solutions were mixed and incubated in 50 °C water bath for 20 min. 
Subsequently, 2.5 mL of 10 % trichloroacetic acid was added to stop the reaction. Then, the solution in each test tube was separated into aliquots of 2.5 mL, added with 2.5 mL of Milli-Q water and 0.5 mL of 0.1 % FeCl3. The solutions were mixed and left on bench for 30 min before the absorbance was measured at 700 nm. FRP was expressed as mg gallic acid equivalent per gram of sample, mg GAE/g. Ferrous ion chelating (FIC) assay The determination of ferrous ion chelating strength of the extract was based on the procedures described in Mau et al. [22], and Singh and Rajini [23]. One mL of 0.1 mM FeSO4 was added to 1 mL of sample of different concentrations (0.2, 0.5 and 1 mL of extract, diluted with methanol), followed by 1 mL of 0.25 mM ferrozine. The mixtures were incubated at room temperature for 10 min before the absorbance was measured at 562 nm. It was expressed as the percentage of iron chelating activity. EDTA (0.017–0.067 mg/mL) was used as a positive control. Lipid peroxidation inhibition (LPI) Lipid peroxidation inhibition was adapted from Kumazawa et al. [24] based on the bleaching of β-carotene with slight modifications. Six mg of β-carotene was dissolved in 50 mL of chloroform, and 4 mL of the solution was mixed with 40 mg of linoleic acid and 400 mg of Tween 40 emulsifier in a conical flask. Nitrogen gas was then used to evaporate the chloroform. Then, 100 mL of oxygenated Milli-Q water was added and the flask was shaken for the mixture to be fully dissolved. Immediately after water was added, the absorbance of the mixture was measured at 470 and 700 nm using Perkin-Elmer double-beam spectrophotometer. Next, 3 mL of the emulsion was added into test tubes with different volume of sample (10, 50 and 100 μL). The tubes were sealed with parafilm and incubated at 50 °C water bath for an hour before the absorbance was measured again. For control, 100 μL of methanol was used instead of the sample. The blank was prepared by adding the same volume of sample with the emulsion of 400 mg Tween 40 emulsifier and 100 mL of water but without linoleic acid and β-carotene solution. The absorbance (A) at 700 nm was taken in order to correct the haze present in the solution. The LPI was calculated using the formula: $${\text{Degradation Rate }}\left( {\text{DR}} \right) \, = \, \left[ {{\text{Ln }}\left( {{\text{A}}_{\text{Initial}} / {\text{A}}_{\text{Sample}} } \right)} \right]/ 60$$ $${\text{Antioxidant Activity }}\left( {\% {\text{ AOA}}} \right) \, = \, \left[ { 1 { }{-} \, \left( {{\text{DR}}_{\text{Sample}} / {\text{DR}}_{\text{Control}} } \right)} \right]\; \times \; 100.$$ Determination of tyrosinase inhibition activity The measurement of tyrosinase inhibition activity of extracts was based on the method used by Masuda et al. [25], with slight modification. The method utilizes l-DOPA and a 96-well microplate for screening of multiple samples simultaneously. For testing of the sample extract, the wells in triplicate were added with 80 μL of buffer (0.1 M phosphate buffer, pH 6.8), 40 μL of tyrosinase (1 mg/mL of tyrosinase diluted 50-folds with Milli-Q water), and 40 μL of sample (1 mg of freeze-dried extract in 400 μL of 50 % DMSO), and lastly 40 μL of l-DOPA. For the control, the same reagents were added to the wells except that the sample was substituted with 40 μL of DMSO. The blank contained 120 μL of buffer, 40 μL of tyrosinase, and 40 μL of sample. The mixture in each well was mixed and left on the bench for 30 min (after the addition of l-DOPA). 
Subsequently, the microplate was measured using BIOTEK PowerWave XS Microplate Scanning Spectrophotometer at the absorbance of 475 nm, with reference wavelength at 700 nm. The percentage tyrosinase inhibition activity was calculated using equation below: $$\% {\text{ Tyrosinase inhibition activity}} = \, \left[ { 1 { }{-} \, \left( {{\text{A}}_{\text{Sample}} / {\text{A}}_{\text{Control}} } \right)} \right]\; \times \; 100.$$ Statistical analysis was carried out with one-way ANOVA and the Tukey HSD test was used to identify any significance between the TPC values of the samples. p < 0.05 was considered to be significantly different. The TPC, FRS, FRP and TFC results reported in Table 1 are a measure of the primary antioxidant activity—the ability to scavenge free radicals, thus inhibiting chain initiation and terminating chain propagation [26]. D. quercifolia fertile fronds exhibited a very high TPC of 2939 ± 469 mg GAE/100 g—consistent with previous findings reported by our research group where D. quercifolia fertile fronds ranked second highest in TPC amongst the fifteen ferns screened with an average TPC exceeding 2500 mg GAE/100 g [27]. D. quercifolia had the highest TPC of the three species screened: approximately three times higher than the fronds of D. rigidula and nearly ten-fold higher than the fronds of D. sparsisora; a surprising result given the similar physical appearance of the D. quercifolia and D. sparsisora fronds. The results for the FRS activity (IC50 and AEAC) reflected a similar correlation, with D. quercifolia fronds exhibiting the highest activity, followed by D. rigidula and D. sparsisora. However, despite the higher TPC and FRS in D. rigidula fronds compared to D. sparsisora fronds, the FRP between the fronds of both species were similar thus indicating that phenolic compounds may not necessarily be the only reducing agents present in the fronds of D. sparsisora. Total phenolic content (TPC), free radical scavenging activity (IC50 and ascorbic acid equivalent antioxidant capacity, AEAC), ferric reducing power (FRP) and total flavonoid content (TFC) of Drynaria sp fertile fronds and rhizomes TPC (mg GAE/100 g) IC50 (mg/mL) AEAC (mg AA/100 g) FRP (mg GAE/g) TFC (mg QE/100 g) D. quercifolia Frond 2939 ± 469a 0.09 ± 0.01a 17.5 ± 2.2a 0.1 ± 0.0a 1732 ± 437b 0.17 ± 0.03b 9.5 ± 1.3b D. rigidula 1031 ± 132c 0.79 ± 0.22c 507.3 ± 151.0c 2.3 ± 0.3c 305.5 ± 17.2d 380.6 ± 64.9c 1.0 ± 0.2d D. sparsisora 2.01 ± 0.62d Results are expressed as mean ± SD (n = 3). For each column, values followed by the same letter are not significantly different at p < 0.05 as measured by the Tukey HSD test When compared to the fertile fronds, the rhizomes of D. quercifolia and D. rigidula exhibited lower primary antioxidant activity, with lower TPC and weaker FRP activity (Table 1). Only in D. sparsisora was there no significant difference in TPC and FRP between the leaves and rhizomes, but the rhizomes showed significantly higher FRS than the fronds. Despite the TPC of D. sparsisora's leaves and rhizomes being comparable with one another, the free-radical scavenging capability of the rhizomes was considerably higher than would be expected based on the TPC alone. This indicates the rhizomes of D. sparsisora either contain non-phenolic free-radical scavengers, or the phenols in the rhizomes are better free radical scavengers than those in the fertile fronds. 
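For readers who want to reproduce the arithmetic behind these assay results, the sketch below (in Python) implements the calculations defined in the Methods: AEAC from an IC50, %AOA from the β-carotene degradation rates, and % tyrosinase inhibition. All absorbances and the sample IC50 are invented; the only value taken from the text is the ascorbic acid IC50 of 0.00387 mg/mL.

    import math

    def aeac_mg_aa_per_100g(ic50_sample_mg_per_ml, ic50_aa_mg_per_ml=0.00387):
        # AEAC = IC50(AA) / IC50(sample) x 10^5, expressed in mg AA/100 g
        return ic50_aa_mg_per_ml / ic50_sample_mg_per_ml * 1e5

    def percent_aoa(a_initial, a_sample_60, a_control_60, minutes=60):
        # DR = ln(A_initial / A_after_incubation) / t; %AOA = [1 - DR_sample/DR_control] x 100
        dr_sample = math.log(a_initial / a_sample_60) / minutes
        dr_control = math.log(a_initial / a_control_60) / minutes
        return (1 - dr_sample / dr_control) * 100

    def percent_tyrosinase_inhibition(a_sample, a_control):
        # % inhibition = [1 - (A_sample/A_control)] x 100; negative values mean enhancement
        return (1 - a_sample / a_control) * 100

    print(aeac_mg_aa_per_100g(0.50))                   # hypothetical extract with IC50 = 0.50 mg/mL
    print(percent_aoa(0.90, 0.70, 0.45))               # hypothetical absorbances at 470 nm
    print(percent_tyrosinase_inhibition(0.38, 0.25))   # A_sample > A_control gives a negative value

The negative result in the last line mirrors the way the tyrosinase enhancement activities in Table 2 are reported.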
While it has been reported that the leaves of a plant may contain more phenols than the rhizomes [28, 29], the comparatively strong radical scavenging of the D. sparsisora rhizome is nevertheless an interesting finding, as thus far only the rhizomes of these ferns have been used in ethnobotany. The lipid peroxidation inhibition (LPI) activity (Fig. 1) measures the ability of an antioxidant to inhibit lipid peroxidation, and is thus a more accurate assessment of the hydrophobic antioxidants present. This is in contrast to the FRS activity (Table 1), which measures the antioxidant activity of both hydrophilic and hydrophobic compounds. All three species showed considerable antioxidant activity of 50 % or above at the maximum concentration tested (645.2 μg fresh sample/mL), in a concentration-dependent manner. The LPI of the fronds and rhizomes was comparable despite the differences in other facets of their antioxidant activity (Table 1), indicating that the hydrophobic antioxidants present in the fronds and rhizomes were comparable in antioxidant activity. Examples of such hydrophobic antioxidants would include terpenoids/carotenoids and tocopherols [30].
Fig. 1 Lipid peroxidation inhibition (LPI) of fertile fronds and rhizomes of Drynaria sp. at 66.4, 327.9 and 645.2 μg/mL
While the TPC, FRS, FRP, TFC and LPI assays are measures of primary antioxidant activity, ferrous ion chelating (FIC) activity is a measure of secondary antioxidant activity—the ability to prevent the formation of radicals, such as those generated by the Fenton reaction in the presence of free ferrous ions [31]. Interestingly, the fertile fronds and rhizomes of all three species showed a low ferrous ion chelating activity, with the fertile fronds of D. rigidula showing the highest chelating activity at approximately 45 % at 7 mg/mL (Fig. 2). This appears to be a common phenomenon amongst ferns, as even those with exceptionally high TPC values exceeding 2500 mg GAE/100 g, such as Cyathea latebrosa, Cibotium barometz and Dicranopteris linearis, have been reported to exhibit poor chelating activity [27]. The phenolic compounds present in the Drynaria samples are likely weak iron chelators, and thus weak secondary antioxidants.
Fig. 2 Ferrous ion chelating (FIC) activity of the fertile fronds and rhizomes of D. quercifolia, D. rigidula and D. sparsisora
As shown in Table 2, D. quercifolia fronds showed a high tyrosinase inhibition activity (exceeding 50 %) [32], and may therefore have potential commercial applications as a natural tyrosinase inhibitor, such as in the cosmetics industry (skin whitening) and the food industry (antibrowning preservation) [5]. Interestingly, however, most of the extracts exhibited tyrosinase enhancement activity, effectively acting as tanning agents. While reports on tyrosinase inhibition are more common, a number of extracts are known to increase melanogenesis, including kava rhizomes, lotus flowers and mangosteen leaves [33].
Although phenolic compounds are often tyrosinase inhibitors, several are capable of acting as tyrosinase activity enhancers, such as naringenin [5] and 4′-O-β-d-glucopyranosyl-(1→2)-β-d-glucopyranosyl-quercetin-3-O-β-d-glucopyranosyl-(1→4)-β-d-glucopyranoside [34]. Macrocycles such as cryptand and crown ethers are also capable of enhancing tyrosinase activity by acting as carriers for water molecules, which in turn increases the enzyme's activity [35]. Tyrosinase enhancement can see application in skin care products designed to reduce UV damage for skin cancer prevention [34], and as a treatment for hypopigmentation [33]. It can also prove useful in the cosmetics industry, for self-tanning [33]. The exact causal agent for this tyrosinase enhancement would require further investigation, particularly in the fronds of D. sparsisora, which showed remarkable tyrosinase enhancement activity. Tyrosinase inhibition (%) of Drynaria sp fertile fronds and rhizomes (0.5 mg/mL) Tyrosinase inhibition (%) 56.6 ± 5.0 −27.9 ± 11.1 −65.0 ± 6.9 −153.7 ± 10.1 Data in mean ± SD (n = 3). Negative values imply tyrosinase enhancement activity Of the three fern species screened, D. quercifolia fertile fronds exhibited the highest TPC, FRS, FRP and LPI. The fertile fronds of D. quercifolia and D. rigidula showed significantly higher TPC and FRP when compared to their rhizomes, with D. quercifolia fronds also exhibiting significantly higher FRS. D. sparsisora rhizomes on the other hand showed similar antioxidant activity with the fronds, with the exception of the rhizome's FRS being significantly higher than that of the fronds. Interestingly, regardless of the differences in other antioxidant properties, the LPI between the fronds and rhizomes of all three species were comparable at the highest concentration studied. The fronds of D. quercifolia showed good tyrosinase inhibition, while the fronds of D. sparsisora showed remarkable tyrosinase enhancement. These findings may hopefully provide further impetus into looking at the fronds of other ferns which are commonly used mostly for their rhizomes. JTBL carried out all the laboratory bench work; collected and analyzed the data; and wrote this paper. LYY conceptualized the project design and provided advice and guidance throughout the project and writing of this paper. Both authors read and approved the final manuscript. The authors wish to thank Monash University Malaysia for the financial support. School of Science, Monash University Malaysia, Bandar Sunway, 46150 Petaling Jaya, Selangor, Malaysia Lee JH, Park KH, Lee M, Kim H, Seo WD, Kim JY, Baek I, Jang DS, Ha TJ. Identification, characterisation, and quantification of phenolic compounds in the antioxidant activity-containing fraction from the seeds of Korean perilla (Perilla frutescens) cultivars. Food Chem. 2013;136(2):843–52.View ArticlePubMedGoogle Scholar Staszewski MV, Pilosof AMR, Jagus RJ. Antioxidant and antimicrobial performance of different argentinean green tea varieties as affected by whey proteins. Food Chem. 2011;125:186–92.View ArticleGoogle Scholar Bansal S, Choudhary S, Sharma M, Kumar SS, Lohan S, Bhardwaj V, Syan N, Jyoti S. Tea: a native source of antimicrobial agents. Food Res Int. 2013;53(2):568–84.View ArticleGoogle Scholar Cai Y, Luo Q, Sun M, Corke H. Antioxidant activity and phenolic compounds of 112 traditional Chinese medicinal plants associated with anticancer. Life Sci. 2004;74:2157–84.View ArticlePubMedGoogle Scholar Chang T. An updated review of tyrosinase inhibitors. 
Int J Mol Sci. 2009;10(6):2440–75.PubMed CentralView ArticlePubMedGoogle Scholar Higashi Y, Asanuma M, Miyazaki I, Ogawa N. Inhibition of tyrosinase reduces cell viability in catecholaminergic neuronal cells. J Neurochem. 2002;75(4):1771–4.View ArticleGoogle Scholar Grimm J, Mueller A, Hefti F, Rosenthal A. Molecular basis for catecholaminergic neuron diversity. Proc Natl Acad Sci USA. 2004;101(38):13891–6.PubMed CentralView ArticlePubMedGoogle Scholar Mithraja MJ, Irudayaraj V, Kiruba S, Jeeva S. Antibacterial efficacy of Drynaria quercifolia (L.) J. Smith (Polypodiaceae) against clinically isolated urinary tract pathogens. Asia-Pa. J Trop Biomed. 2012;2:S131–5.View ArticleGoogle Scholar Chang HC, Agrawal DC, Kuo CL, Wen JL, Chen CC, Tsay HS. In vitro culture of Drynaria fortunei, a fern species source of Chinese medicine "Gu-Sui-Bu". Vitro Cell Dev Biol Plant. 2007;43:133–9.View ArticleGoogle Scholar Khan A, Haque E, Rahman MM, Mosaddik A, Rahman M, Sultana N. Isolation of antimicrobial constituent from rhizome of Drynaria quercifolia and its sub-acute toxicological studies. DARU. 2007;15(4):205–11.Google Scholar Ramesh N, Viswanathan MB, Saraswathy A, Balakrishna K, Brindha P, Lakshmanaperumalsamy P. Phytochemical and antimicrobial studies on Drynaria quercifolia. Fitoterapia. 2001;72:934–6.View ArticlePubMedGoogle Scholar Sen A, Ghosh PD. A note on the ethnobotanical studies of some pteridophytes in Assam. Indian J Tradit Knowl. 2011;10:292–5.Google Scholar Johnson T. CRC Ethnobotany Desk Reference. San Francisco: California CRC Press; 1999.Google Scholar Yang F, Zhang C, Wu G, Liang S, Zhang X. Endangered Pteridophytes and their distribution in Hainan Island, China. Am Fern J. 2011;101(2):105–16.View ArticleGoogle Scholar Wee YC. A Guide to the Ferns of Singapore. 3rd ed. Singapore: Singapore Science Centre; 2002.Google Scholar Ranil RHG, Pushpakumara DKNG. Occurrence of Drynaria sparsisora (Desv.) T. Moore, in the lower Hantana area, Sri Lanka. J Natl Sci Found Sri Lanka. 2008;36(4):331–4.Google Scholar Kähkönen MP, Hopia AI, Vuorela HJ, Rauha JP, Pihlaja K, Kujala TS, Heihonen M. Antioxidant activity of plant extracts containing phenolic compounds. J Agric Food Chem. 1999;47(10):3954–62.View ArticlePubMedGoogle Scholar Chang C, Yang M, Wen H, Chern J. Estimation of total flavonoid content in propolis by two complementary colorimetric methods. J Food Drug Anal. 2002;10(3):178–82.Google Scholar Leong L, Shui G. An investigation of antioxidant capacity of fruits in Singapore markets. Food Chem. 2002;76(1):69–75.View ArticleGoogle Scholar Miliauskas G, Venskutonis PR, van Beek TA. Screening of radical scavenging activity of some medicinal and aromatic plant extracts. Food Chem. 2004;85(2):231–7.View ArticleGoogle Scholar Tan YP, Chan EWC. Antioxidant, antityrosinase and antibacterial properties of fresh and processed leaves of Anacardium occidentale and Piper betle. Food Biosci. 2014;6:17–23.View ArticleGoogle Scholar Mau JL, Lai EYC, Wang NP, Chen CC, Chang CH, Chyau CC. Composition and antioxidant activity of the essential oil from Curcuma zedoaria. Food Chem. 2003;82(4):583–91.View ArticleGoogle Scholar Singh N, Rajini PS. Free radical scavenging activity of an aqueous extract of potato peel. Food Chem. 2004;85:611–6.View ArticleGoogle Scholar Kumazawa S, Taniguchi M, Suzuki Y, Shimura M, Kwon MS, Nakayama T. Antioxidant activity of polyphenols in carob pods. J Agric Food Chem. 2002;50:373–7.View ArticlePubMedGoogle Scholar Masuda T, Yamashita D, Takeda Y, Yonemori S. 
Screening for tyrosinase inhibitors among extracts of seashore plants and identification of potent inhibitors from Garcinia subelliptica. Biosci Biotech Biochem. 2005;69(1):197–201.View ArticleGoogle Scholar Lim YY, Quah EPL. Antioxidant properties of Phyllanthus amarus extracts as affected by different methods. Food Chem. 2007;103:734–40.View ArticleGoogle Scholar Lai HY, Lim YY. Evaluation of antioxidant activities of the methanolic extracts of selected ferns in Malaysia. Int J Environ Sci Dev. 2011;2:442–7.View ArticleGoogle Scholar Chan EWC, Lim YY, Omar M. Antioxidant and antibacterial activity of leaves of Etlingera species (Zingiberaceae) in Peninsular Malaysia. Food Chem. 2007;104:1586–93.View ArticleGoogle Scholar Elzaawely AA, Xuan TD, Tawata S. Essential oils, kava pyrones and phenolic compounds from leaves and rhizomes of Alpinia zerumbet (Pers.) B.L. Burtt. & R.M. Sm. and their antioxidant activity. Food Chem. 2006;103(2):486–94.View ArticleGoogle Scholar Graβmann J. Terpenoids as plant antioxidants. Vitam Horm. 2005;72:505–35.View ArticleGoogle Scholar Chandrasekara A, Shahidi F. Bioaccessibility and antioxidant potential of millet grain phenolics as affected by simulated in vitro digestion and microbial fermentation. J Funct Foods. 2012;4(1):226–37.View ArticleGoogle Scholar Baurin N, Arnoult E, Scior T, Do QT, Bernard P. Preliminary screening of some tropical plants for anti-tyrosinase activity. J Ethnopharmacol. 2002;82:155–8.View ArticlePubMedGoogle Scholar Hamid MA, Sarmidi MR, Park CS. Mangosteen leaf extract increases melanogenesis in B16F1 melanoma cells by stimulating tyrosinase activity in vitro and by up-regulating tyrosinase gene expression. Int J Mol Med. 2012;29(2):209–17.PubMedGoogle Scholar Yamauchi K, Mitsunaga T, Batubara I. Novel quercetin glucosides from Helminthostachys zeylanica root and acceleratory activity of melanin biosynthesis. J Nat Med. 2013;67(2):369–74.View ArticlePubMedGoogle Scholar Broos J, Arends R, van Dijk GB, Verboom W, Engbersen JFJ, Reinhoudt DN. Enhancement of tyrosinase activity by macrocycles in the oxidation of p-cresol in organic solvents. J Chem Soc Perk T. 1996;1:1415–7.View ArticleGoogle Scholar
Zap. Nauchn. Sem. POMI, 2010, Volume 383, Pages 77–85 (Mi znsl3873) This article is cited in 2 scientific papers (total in 2 papers) On the components of the lemniscate containing no critical points of a polynomial other than its zeros V. N. Dubinin Institute of Applied Mathematics, Far-Eastern Branch of the Russian Academy of Sciences, Vladivostok, Russia Abstract: Let $P$ be a complex polynomial of degree $n$ and let $E$ be a connected component of the set $\{z\colon|P(z)|\leq1\}$ containing no critical points of $P$ other than its zeros. We prove the inequality $|(z-a)P'(z)/P(z)|\leq n$ for all $z\in E\setminus\{a\}$, where $a$ is the zero of the polynomial $P$ lying in $E$. Equality is attained for $P(z)=cz^n$ and any $z$, $c\neq0$. Bibl. 4 titles. Key words and phrases: polynomial, lemniscate, Steiner symmetrization. Journal of Mathematical Sciences (New York), 2011, 178:2, 158–162 UDC: 517.54 Citation: V. N. Dubinin, "On the components of the lemniscate containing no critical points of a polynomial other than its zeros", Analytical theory of numbers and theory of functions. Part 25, Zap. Nauchn. Sem. POMI, 383, POMI, St. Petersburg, 2010, 77–85; J. Math. Sci. (N. Y.), 178:2 (2011), 158–162, doi: https://doi.org/10.1007/s10958-011-0534-0, http://mi.mathnet.ru/eng/znsl3873, http://mi.mathnet.ru/eng/znsl/v383/p77 Cited by: V. N. Dubinin, "Methods of geometric function theory in classical and modern problems for polynomials", Russian Math. Surveys, 67:4 (2012), 599–684; V. N. Dubinin, A. S. Afanaseva-Grigoreva, "O lemniskatakh ratsionalnykh funktsii" [On lemniscates of rational functions], Dalnevost. matem. zhurn., 17:2 (2017), 201–209
Implication Zroupoids and Birkhoff Systems J.M. Cornejo 1 H.P. Sankappanavar 2 1 Departamento de Matemática, Universidad Nacional del Sur, Alem 1253, Bahía Blanca, Argentina, INMABB - CONICET. 2 Hanamantagouda P. Sankappanavar, Department of Mathematics, State University of New York, New Paltz, New York 12561, U.S.A. An algebra $\mathbf A = \langle A, \to, 0 \rangle$, where $\to$ is binary and $0$ is a constant, is called an implication zroupoid ($\mathcal{I}$-zroupoid, for short) if $\mathbf{A}$ satisfies the identities: $(x \to y) \to z \approx [(z' \to x) \to (y \to z)']'$, where $x' := x \to 0$, and $0'' \approx 0$. These algebras generalize De Morgan algebras and $\vee$-semilattices with zero. Let $\mathcal{I}$ denote the variety of implication zroupoids. The investigations into the structure of $\mathcal{I}$ and of the lattice of subvarieties of $\mathcal{I}$, begun in 2012, have continued in several papers (see the Bibliography at the end of the paper). The present paper is a sequel to that series of papers and is devoted to making further contributions to the theory of implication zroupoids. The main purpose of this paper is to prove that if $\mathbf{A}$ is an algebra in the variety $\mathcal{I}$, then the derived algebra $\mathbf{A}_{mj} := \langle A; \wedge, \vee \rangle$, where $a \land b := (a \to b')'$ and $a \lor b := (a' \land b')'$, satisfies Birkhoff's identity (BR): $x \land (x \lor y) \approx x \lor (x \land y)$. As a consequence, the implication zroupoids $\mathbf A$ whose derived algebras $\mathbf{A}_{mj}$ are Birkhoff systems are characterized. Another interesting consequence of the main result is that there are bisemigroups that are not bisemilattices but satisfy Birkhoff's identity, which leads us naturally to define the variety of "Birkhoff bisemigroups" as bisemigroups satisfying the Birkhoff identity, as a generalization of Birkhoff systems. The paper concludes with some open problems. Implication zroupoid; symmetric implication zroupoid; bisemigroup; Birkhoff identity; Birkhoff system; implication semigroup; Birkhoff bisemigroup [1] R. Balbes, P.H. Dwinger, Distributive lattices, University of Missouri Press, Columbia, 1974. [2] B.A. Bernstein, A set of four postulates for Boolean algebras in terms of the implicative operation, Transactions of the American Mathematical Society, 36 (1934), 876–884. [3] G. Birkhoff, Lattice theory, Second ed., American Mathematical Society Colloquium Publications, 25, American Mathematical Society, Providence, R.I., 1948. [4] R.A. Borzooei, M. A. Kologani, Results on hoops, Journal of Hyper Algebraic Structures and Logical Algebras, 1(1) (2020), 61–77. [5] S. Burris, H.P. Sankappanavar, A course in universal algebra, Springer-Verlag, New York, 1981. [6] D. Busneag, D. Piciu, M. Istrata, The Belluce lattice associated with a bounded BCK-algebra, Journal of Hyper Algebraic Structures and Logical Algebras, 2(1) (2021), 1–16. [7] J.M. Cornejo, H.P. Sankappanavar, Order in implication zroupoids, Studia Logica, 104(3) (2016), 417– [8] J.M. Cornejo, H.P. Sankappanavar, Semisimple varieties of implication zroupoids, Soft Computing, 20(3) (2016), 3139–3151. [9] J.M. Cornejo, H.P. Sankappanavar, On implicator groupoids, Algebra Universalis, 77(2) (2017), 125– [10] J.M. Cornejo, H.P. Sankappanavar, On derived algebras and subvarieties of implication zroupoids, Soft Computing, 21(23) (2017), 6963–6982. [11] J.M. Cornejo, H.P.
Sankappanavar, Symmetric implication zroupoids and identities of Bol-Moufang type, Soft Computing, 22 (2018), 4319–4333. [12] J.M. Cornejo, H.P. Sankappanavar, Implication zroupoids and identities of associative type, Quasigroups and Related Systems, 26 (2018), 13–34. [13] J.M. Cornejo, H.P. Sankappanavar, Symmetric implication zroupoids and weak associative identities, Soft Computing, 23(6) (2019), 6797–6812. [14] J.M. Cornejo, H.P. Sankappanavar, Semi-distributivity and Whitman property in implication zroupoids, Mathematica Slovaca, (2021), 10 pages. [15] S.V. Gusev, H.P. Sankappanavar, B.M. Vernikov, Implication semigroups, Order, 37 (2020), 1–7. [16] J. Harding, A. Romanowska, Varieties of Birkhoff systems, Part I, Order, 34 (2017), 45–68. [17] W. McCune, Prover 9 and Mace 4, (2005-2010). http://www.cs.unm.edu/mccune/prover9/ [18] J. Płonka, On distributive quasilattices, Fundamenta Mathematicae, 60 (1967), 191–200. [19] H. Rasiowa, An algebraic approach to non-classical logics, North-Holland, Amsterdam, 1974. [20] H.P. Sankappanavar, De Morgan algebras: New perspectives and applications, Scientia Mathematicae Japonicae, 75(1) (2012), 21–50. Cornejo, J., & Sankappanavar, H. (2021). Implication Zroupoids and Birkhoff Systems. Journal of Algebraic Hyperstructures and Logical Algebras, 2(4), 1-12. doi: 10.52547/HATEF.JAHLA.2.4.1
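As a small, concrete illustration of the definitions recalled in the abstract above, the script below brute-forces both the defining identities of an implication zroupoid and the Birkhoff identity for the derived operations on the four-element Kleene (De Morgan) chain 0 < a < b < 1. The choice of x → y := x′ ∨ y as the implication on a De Morgan algebra is an assumption made only for this example (the abstract says implication zroupoids generalize De Morgan algebras, but the exact translation is not spelled out here); the script does not take that choice on faith and instead checks the identities element by element, so the example is self-verifying. The encoding of the chain and all names are ours.

```python
# Brute-force check of the abstract's definitions on a small example.
from itertools import product

ELEMS = [0, 1, 2, 3]            # encode the chain 0 < a < b < 1 as 0 < 1 < 2 < 3
NEG = {0: 3, 1: 2, 2: 1, 3: 0}  # Kleene negation: 0' = 1, a' = b, b' = a, 1' = 0

def imp(x, y):
    """x -> y := x' v y on the chain (join = max); assumed only for this example."""
    return max(NEG[x], y)

def neg(x):            # x' := x -> 0
    return imp(x, 0)

def meet(x, y):        # a ^ b := (a -> b')'
    return neg(imp(x, neg(y)))

def join(x, y):        # a v b := (a' ^ b')'
    return neg(meet(neg(x), neg(y)))

# (1) the defining identities of an implication zroupoid
is_zroupoid = all(
    imp(imp(x, y), z) == neg(imp(imp(neg(z), x), neg(imp(y, z))))
    for x, y, z in product(ELEMS, repeat=3)
) and neg(neg(0)) == 0

# (2) Birkhoff's identity for the derived operations
birkhoff_holds = all(
    meet(x, join(x, y)) == join(x, meet(x, y))
    for x, y in product(ELEMS, repeat=2)
)

print("defining identities hold:", is_zroupoid)   # True
print("Birkhoff identity holds:", birkhoff_holds) # True
```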
Energy optimisation of hybrid off-grid system for remote telecommunication base station deployment in Malaysia Mohammed H Alsharif1, Rosdiadee Nordin1 & Mahamod Ismail1 Cellular network operators are always seeking to increase the area of coverage of their networks, open up new markets and provide services to potential customers in remote rural areas. However, increased energy consumption, operator energy cost and the potential environmental impact of increased greenhouse gas emissions and the exhaustion of non-renewable energy resources (fossil fuel) pose major challenges to cellular network operators. The specific power supply needs for rural base stations (BSs) such as cost-effectiveness, efficiency, sustainability and reliability can be satisfied by taking advantage of the technological advances in renewable energy. This study investigates the possibility of decreasing both operational expenditure (OPEX) and greenhouse gas emissions with guaranteed sustainability and reliability for rural BSs using a solar photovoltaic/diesel generator hybrid power system. Three key aspects have been investigated: (i) energy yield, (ii) economic factors and (iii) greenhouse gas emissions. The results showed major benefits for mobile operators in terms of both environmental conservation and OPEX reduction, with an average annual OPEX savings of 43% to 47% based on the characteristics of solar radiation exposure in Malaysia. Finally, the paper compares the feasibility of using the proposed approach in a four-season country and compares the results against results obtained in Malaysia, which is a country with a tropical climate. The unexpected increase in subscribers and demand for high-speed data has led to tremendous growth in cellular networks in the last several years. In 2013, the number of mobile subscribers reached 6.8 billion [1], of whom 1 billion were in rural areas (off-grid sites) [2]. To access these new markets and provide a service to potential customers in remote rural areas, the number of BSs has been increased to fulfil the needs of mobile subscribers and increase the coverage area. According to [3], BSs are considered the primary source of energy consumption in cellular networks, accounting for 57% of the total energy used and putting the cellular operators under immense pressure to solve the problem of electricity supply in a reliable and cost-effective way. However, electrical grids are typically not available in these remote locations due to the geographical limitations (challenging terrain) that make access to these sites difficult. Therefore, supplying power to an off-grid BS is a significant challenge. Traditionally, a diesel generator (DG) is used to supply electrical power to a base station at an off-grid site [4]. Nevertheless, the concept of using DGs to power BSs has become much less viable for network companies looking to expand and deliver their services to potential new customers for the following reasons: Economic aspect: most of the costs go to the fuel, which is expensive, and the fuel price will continue to increase in the future. Additionally, the transfer of fuel to these remote places characterised by challenging terrain may require special means of transport, such as helicopters, which increases the cost of the operation [5]. Figure 1 presents the costs of operating DGs, which include the initial capital cost, installation cost, operating and maintenance (O&M) cost and fuel cost [6]. Net costs of operating a DG [6]. 
Environmental impact: air pollution by harmful components emitted from diesel fuels such as carbon dioxide, sulphur dioxide and nitrogen oxides causes global warming, the depletion of the ozone layer, cancer, genetic mutations and acid rain. According to [7], the ICT sector's CO2 emissions will rise to 349 MtCO2 by 2020, with 51% (179 MtCO2) of the emissions coming from the mobile sector. Accordingly, increasing awareness of the environmental impact of greenhouse gas emissions and the exhaustion of non-renewable energy resources (fossil fuel) has highlighted the critical need to improve the energy efficiency of cellular networks. Technical issues: the efficiency of the DG system is low, with only approximately 30% of the fuel energy being converted to electrical energy and the rest lost as heat [2]. In addition, BS sites supplied by DGs are not reliable; 65% of the telecom service losses are due to outages resulting from a variety of the failures from which these generators suffer [2]. When a failure occurs, the input of substantial time and money is generally required to bring a technician on site to perform the necessary repairs. These drawbacks have motivated cellular network operators to search for solutions to both promote environmental conservation and reduce capital and operational costs. The specific power supply requirements for rural BSs, such as cost-effectiveness, efficiency, sustainability and reliability, can be met by utilising the technological advances in renewable energy. The network equipment deployed in these remote locations must be optimised so that they save energy while providing the best service with the largest coverage feasible. If network companies can install renewable energy supplies for their remote sites, they would gain access to hundreds of millions of potential new customers. Furthermore, by adopting renewable energy sources, network operators can reduce their operating expenses and have a positive impact on the environment via the reduction of harmful gas emissions. The rest of this paper is organised as follows. Section 2 presents studies that have investigated the possibility of reducing both the OPEX and greenhouse gas emissions through the use of renewable energy systems in different zones of the world. Section 3 discusses the potential for using renewable energy to supply the BSs in remote places in Malaysia, and Section 4 describes the use of solar energy in Malaysia, including the characteristics of the solar radiation of Malaysia and the barriers to using solar photovoltaic (SPV) panels in Malaysia, as well as some recommendations. In Section 5, the system architecture for the hybrid power model to supply long-term evolution (LTE)-BS is described, and Section 6 presents the mathematical model. Section 7 briefly introduces the hybrid optimisation model for electric renewables (HOMER) software, a hybrid power system modelling tool used in this study, and Section 8 includes the simulation configuration. Section 9 presents the results and discussion. Section 10 presents the comparison and estimation of the feasibility of using the solar energy approach in Germany by using Malaysia as the central point of reference, and Section 11 concludes this paper. Over the last 5 years, significant developments have been made in integrating electrical grids and renewable energy into a smart grid to manage the power supply of BS sites. 
In India, efforts have been made to optimise the size of wind turbine generators (WTGs), SPV arrays and other components for a hybrid power system; generator-based power supplies for global system for mobile communication (GSM, alternatively referred to as 2G) and code division multiple access (CDMA, 3G) standards have also been investigated [8,9]. Reference [10] studied the feasibility of implementing a SPV/diesel hybrid power generation system suitable for a GSM base station site in Nigeria. Waqas et al. [11] presented the design idea for a solar system with a diesel generator as a backup source for a GSM cellular network standard in Pakistan. Reference [12] studied the feasibility of implementing an SPV/diesel hybrid power generation system suitable for a GSM base station site in Bangladesh. Martinez-Diaz et al. [13] discussed a photovoltaic (PV)-wind-diesel-battery system for a station in Spain. In Nepal, reference [6] studied the optimisation of a hybrid PV-wind power system for a remote telecom station. Kanzumba et al. [2] investigated the possibility of using hybrid photovoltaic/wind renewable systems as primary sources of energy to supply mobile telephone base transceiver stations in the rural regions of the Republic of the Congo. Reference [14] discussed three types of renewable energy: (i) a SPV-battery system, (ii) an SPV-fuel cell (FC) system and (iii) an SPV-FC-battery system. The modelling and size optimisation of such hybrid systems feeding a stand-alone direct current (DC) load at a telecom base station have been carried out using the HOMER software. Vincent et al. [15] proposed a hybrid (solar and hydro) and DG system based on the power system models for powering stand-alone BS sites. Table 1 provides a summary of related works that have been investigated for green wireless network optimisation strategies within smart grid environments. Table 1 Summary of the approach that was discussed Potential for applying renewable energy as the energy supply for BSs in remote sites in Malaysia Malaysia lies entirely within the equatorial region, between 1° and 7° N and 100° and 120° E, offering an abundant potential for the use of renewable energy resources, especially solar and wind power [16]. In the early 1980s, a study of Malaysia's wind energy was undertaken at University Kebangsaan Malaysia (UKM). The Solar Energy Research Group of UKM collected wind data from ten stations throughout the country over a period of 10 years from 1982 to 1991. The data studied include the hourly wind speed at the stations, which are located mostly at airports and near coasts, where the land and sea breezes may influence the wind regime [17]. The study showed that the mean wind speed is low, not exceeding 2 m/s. However, the wind does not blow uniformly, varying by month and region. The greatest wind power potential is that of Mersing and Kuala Terengganu, which are located on the East Coast of Peninsular Malaysia [18]. In 2014, reference [19] presented a study on predicting the wind speed in these states over the long-term using neural networks and a set of recent wind speed measurement samples from the two meteorological stations in these states. The results showed that the future mean wind speed will remain low, not exceeding 3 m/s. Therefore, wind turbines are rarely used in Malaysia and were excluded from this study. Regarding solar energy, Malaysia has a stable climate throughout the year. Hence, the solar radiation in Malaysia is highly relative to global standards. 
Malaysia's solar power is estimated to be four times the power of the world's fossil fuel resources [20]. The global irradiation fluctuated in the range of 2 to 6 kWh/m2/day. The second part of the year (October to February) is characterised by more cloud cover and thus poorer solar potential than the first part of the year (March to October), and the average temperature ranges from 33°C during the day to 23°C at night [21]. Moreover, solar cells have low-maintenance needs and high reliability, with an expected lifespan of 20 to 30 years. One square metre of solar panelling in Malaysia is estimated to result in an annual reduction of 40 kg of CO2 [18], making solar power a promising future energy source for telecommunication applications. Solar energy in Malaysia Solar radiation data in Malaysia have been the subject of earlier studies. Malaysia's climatic conditions are desirable for extending the utilisation of SPV systems due to the high amount of solar radiation received throughout the year. The northern region and a few places in eastern Malaysia receive the highest amount of solar radiation throughout the year. The lowest irradiance value is obtained for Kuching, whereas Kota Kinabalu has the highest measured solar radiation [16]. Figure 2 provides information on solar radiation in different states of Malaysia. Annual average solar radiation (MJ/m 2 /day) [16]. Figure 2 reveals that Sabah, Perlis and Kedah have sufficient solar resources to support solar energy applications. Figure 3 presents the daily solar radiation in these three states. Average daily radiation (kWh/m 2 ) [18]. The average daily solar radiation in Kedah, Sabah and Perlis is equal to 5.48, 5.31 and 5.24 kWh/m2, respectively. According to reference [22], the average solar radiation in these states indicates the high potential of SPV where the solar radiation is higher than 5 kWh/m2/day. Moreover, Figure 4 shows Malaysia's average daily solar radiation, which is estimated to be 5.15 kWh/m2 [18]. Average daily solar energy received in Malaysia [18]. Accordingly, this study will investigate five average daily solar radiation values to cover all the states of Malaysia: 5.1, 5.2, 5.3, 5.4 and 5.5 kWh/m2. Barriers to using SPV panels for remote areas in Malaysia The aforementioned discussion reveals that Malaysia has excellent solar energy potential. However, some barriers that affect the performance of SPV panels should be considered, as power shortages are not admissible in the cellular network sector. One of the major drawbacks of using solar cells is shade, which may dramatically reduce the power produced by the solar cell. In some seasons, tropical countries experience heavy rain and cloudiness that may continue for several days, causing battery banks to run out of charge more quickly. To solve this problem, the present study suggests the hybridisation of the solar power system with existing backup DG in rural areas, which will provide BSs with a sustainable and reliable power supply, especially if the battery lifespan is short. Dirt, dust, tree debris, moss, sap, water spots, mould, etc. on solar panels have a significant impact on the performance of solar power systems. Cleaning the panels is also a problem. The SPV panels are installed at relatively high sites to maximise their access to the sunlight, and moss and grass grow quickly on the panels, increasing the cost of cleaning the panels. 
Therefore, we recommend installing solar panels on tilted roofs (with tilts of as little as 5° to 10°) to allow wind, rain and gravity to remove most debris and dust naturally. In addition, the rapid growth of the surrounding trees will shade the panels, decreasing the performance of PV panels. Solar PV panels mounted on high poles also attract lightning strikes, the destructive voltage of which destroys the electronic components. Furthermore, the bypass diodes, which are mounted in the termination box under each panel, may crack under high humidity, heat and short circuits. One of the solutions is to engage trained educated villagers to operate, maintain and manage the solar panels. Figure 5 is a schematic showing two subsystems: the BS and the hybrid energy source. System model of an adaptive power management scheme for remote LTE-BS. Base station subsystem The BS, a centrally located set of equipment used to communicate with mobile units and the backhaul network, consists of multiple transceivers (TRXs), which in turn consist of a power amplifier (PA) that amplifies the input power, a radio-frequency (RF) small-signal transceiver section, a baseband (BB) for system processing and coding, a DC-DC power supply, a cooling system and an alternating current (AC)-DC unit for connection to the electrical power grid. Table 2 summarises the power consumption of the different pieces of LTE-macro BS equipment for a 2 × 2 MIMO configuration, and three sectors where the total input power (Pin) needed by the LTE-macro BS is 964.9 W. More details on the BS internal components can be found in [23]. Table 2 Power consumption of the LTE-BS hardware elements [23] Hybrid energy source subsystem The main components of a hybrid energy source subsystem are listed below: Solar panels: responsible for collecting sunlight and converting the sunlight into DC electricity. Diesel generator: used as a secondary energy source during the peak demand or in the case of battery depletion. The optimum operation range for a diesel generator is between 70% and 89% of its rated power [24]. Battery bank: stores excess electricity for future consumption by the BS at night, during load-shedding hours, or if the available solar energy is not sufficient to feed the BS load completely. To protect the battery, inclusion of a charge controller is recommended. A charge controller or battery regulator limits the rate at which the electric current is added to or drawn from electric batteries, prevents overcharging and may protect against overvoltage, which can reduce battery performance or lifespan and may pose a safety risk. A charge controller may also prevent the complete draining ('deep discharging') of a battery or perform controlled discharges, depending on the battery technology, to protect battery life [24]. Inverter: converts the DC voltage from the load bus-bar and battery to AC voltage at a high efficiency to satisfy the (BS) requirement of the main load for uninterruptible AC power. The inverter is also able to log information such as system performance (e.g. electricity produced by the system on a daily, monthly or yearly basis) and safety measures to avoid electrical mishaps [24]. Control system: serves as the brain of a complex control, regulation and communication system. The most common communication units in the remote interface are wireless modems or network solutions. In addition to the control functions, data logger and alarm memory capabilities are of high importance. 
All power sources working in parallel are managed by a sophisticated control system and share the load with their capabilities to accommodate the fact that power shortages are not admissible in the cellular telephony sector. Mathematical model The SPV generator contains modules that are composed of many solar cells interconnected in series/parallel to form a solar array. HOMER calculates the energy output of the SPV array (ESPV) by using the following equation [25]: $$ {E}_{\mathrm{SPV}}={Y}_{\mathrm{SPV}}\times \mathrm{P}\mathrm{S}\mathrm{H}\times {f}_{\mathrm{SPV}} $$ where Y SPV is the rated capacity of the SPV array (kW) and PSH is a peak solar hour which is used to express solar irradiation in a particular location when the sun is shining at its maximum value for a certain number of hours. Because the peak solar radiation is 1 kW/m2, the number of peak sun hours is numerically equal to the daily solar radiation in kWh/m2 [18] and f SPV is the SPV derating factor (sometimes called the performance ratio), a scaling factor meant to account for effects of dust on the panel, wire losses, elevated temperature or anything else that would cause the output of the SPV array to deviate from the expected output under ideal conditions. In other words, the derating factor refers to the relationship between actual yield and target yield, which is called the efficiency of the SPV. Today, due to improved manufacturing techniques, the performance ratio of solar cells increased to 85% to 95%. The energy generated (EDG) by a DG with a given rated power output (PDG) is expressed in Equation 2 [25]: $$ {E}_{\mathrm{DG}}={P}_{\mathrm{DG}}\times \mu \times t $$ where μ is the efficiency of the DG. Moreover, the fuel consumption (FC) is calculated as follows [25]: $$ {F}_{\mathrm{C}}={E}_{\mathrm{DG}}\times {F}_{\mathrm{spe}} $$ where E DG is the energy production (kWh) and F spe is the specific fuel consumption (L/kWh). Battery model The battery characteristics that play a significant role in designing a hybrid renewable energy system are battery capacity (Ah), battery voltage (V), battery state of charge (%), depth of discharge (%), days of autonomy (h), efficiency (%) and lifetime of battery (year). The 'Trojan L16P' battery model, which provides good characteristics as shown in Figure 6 combined with low cost, is used in this paper. More details can be found in [26]. 'Trojan L16P' battery model characteristics. The nominal capacity of the battery bank is the maximum state of charge SOCmax of the battery. The minimum state of charge of the battery, SOCmin, is the lower limit that does not discharge below the minimum state of charge, which is 30% in the 'Trojan L16P' battery model, as shown in Figure 6. The depth of discharge (DOD) is used to describe how deeply the battery is discharged and is expressed in Equation 4 [6]: $$ \mathrm{D}\mathrm{O}\mathrm{D}=1-{\mathrm{SOC}}_{min} $$ Based on Equation 4, the DOD for the 'Trojan L16P' battery is 70%, which means that the battery has delivered 70% of its energy and has 30% of its energy reserved. DOD can always be treated as how much energy the battery delivered. A battery bank is used as a backup system and is sized to meet the load demand when the renewable energy resources failed to satisfy the load. The number of days a fully charged battery can feed the load without any contribution of auxiliary power sources is represented by days of autonomy. The battery bank autonomy is the ratio of the battery bank size to the electric load (LTE-BS). 
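For readers who want to reproduce the energy model outside HOMER, a minimal Python sketch of Equations 1 to 4 is given below; the remaining storage and cost relations follow in the next subsection. The 2 kW array, 0.9 derating factor, 5.1 peak sun hours, 0.388 L/kWh specific fuel consumption and 30% minimum state of charge are the values used elsewhere in this paper; the function names are ours, and the snippet illustrates the formulas as stated rather than the HOMER implementation.

```python
# Illustrative sketch of the energy-production relations (Eqs. 1-4).

def spv_daily_energy(y_spv_kw, peak_sun_hours, derating=0.9):
    """Eq. 1: daily SPV array output E_SPV = Y_SPV x PSH x f_SPV (kWh/day)."""
    return y_spv_kw * peak_sun_hours * derating

def dg_energy(p_dg_kw, efficiency, hours):
    """Eq. 2 as stated in the paper: E_DG = P_DG x mu x t (kWh)."""
    return p_dg_kw * efficiency * hours

def dg_fuel(e_dg_kwh, specific_fuel_l_per_kwh=0.388):
    """Eq. 3: fuel consumption F_C = E_DG x F_spe (litres)."""
    return e_dg_kwh * specific_fuel_l_per_kwh

def depth_of_discharge(soc_min):
    """Eq. 4: DOD = 1 - SOC_min, with SOC_min as a fraction (0.30 for the Trojan L16P)."""
    return 1.0 - soc_min

if __name__ == "__main__":
    e_spv = spv_daily_energy(2.0, 5.1)   # 2 kW array, 5.1 peak sun hours, fixed mounting
    print(f"Daily SPV output (fixed array): {e_spv:.2f} kWh")          # ~9.18 kWh
    print(f"DOD of the Trojan L16P: {depth_of_discharge(0.30):.0%}")   # 70%
```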
HOMER calculates the battery bank autonomy (A batt) by using the following equation [25]: $$ {A}_{\mathrm{batt}}=\frac{N_{\mathrm{batt}}\times {V}_{\mathrm{nom}}\times {Q}_{\mathrm{nom}}\left(1-\frac{{\mathrm{SOC}}_{min}}{100}\right)\left(24\ \mathrm{h}/\mathrm{day}\right)}{L_{\mathrm{prim}-\mathrm{a}\mathrm{v}\mathrm{g}}\left(1,000\mathrm{Wh}/\mathrm{k}\mathrm{W}\mathrm{h}\right)} $$ where N batt is the number of batteries in the battery bank, V nom is the nominal voltage of a single battery (V), Q nom is the nominal capacity of a single battery (Ah) and L prim,ave is the average daily LTE-BS load (kWh). Battery life is an important factor that has a direct impact on replacement costs. Two independent factors may limit the lifetime of the battery bank: the lifetime throughput and the battery float life. HOMER calculates the battery bank life (R batt) based on these two factors as given in the following equation [25]: $$ {R}_{\mathrm{batt}}= \min \left(\frac{N_{\mathrm{batt}}\times {Q}_{\mathrm{lifetime}}}{Q_{\mathrm{thrpt}}},{R}_{\mathrm{batt},\mathrm{f}}\right) $$ where Q lifetime is the lifetime throughput of a single battery (kWh), Q thrpt is the annual battery throughput (kWh/year) and R batt,f is the battery float life (year). DC/AC inverter Inverters convert electrical energy from the DC form into the AC form with the desired frequency of the load. The efficiency of the inverter is assumed to be roughly constant over the entire working range (e.g. 90%) [27]. The optimum criteria, including economic, technical and environmental feasibility parameters, were analysed using the HOMER software package developed by the National Renewable Energy Laboratory (NREL). HOMER hybrid power system modelling software HOMER [25] is an optimisation software package used to simulate various renewable energy source (RES) system configurations and scale them based on the net present cost (NPC). The NPC represents the life cycle cost of the system. The calculation assesses all costs that occur within the project lifetime, including initial setup costs (IC), component replacements within the project lifetime and maintenance. Figure 7 presents the architecture of the HOMER software. Architecture of HOMER software. HOMER calculates the NPC according to the following equation [25]: $$ \mathrm{N}\mathrm{P}\mathrm{C} = \frac{\mathrm{TAC}}{\mathrm{CRF}} $$ where TAC is the total annualised cost ($). The capital recovery factor (CRF) is given by: $$ \mathrm{C}\mathrm{R}\mathrm{F}=\frac{i{\left(1+i\right)}^n}{{\left(1+i\right)}^n-1} $$ where n is the project lifetime and i is the annual real interest rate. HOMER assumes that all prices escalate at the same rate and applies an annual real interest rate rather than a nominal interest rate. The discount factor (fd) is a ratio used to calculate the present value of a cash flow that occurs in any year of the project lifetime. HOMER calculates the discount factor by using the following equation [25]: $$ {f}_{\mathrm{d}}=\frac{1}{{\left(1+i\right)}^n} $$ NPC estimation in HOMER also considers the salvage cost, which is the residual value of the power system components at the end of the project lifetime. The equation used to calculate the salvage value (S) is as follows: $$ s=\mathrm{rep}\left(\frac{\mathrm{rem}}{\mathrm{comp}}\right) $$ where rep is the replacement cost of the component, rem is the remaining lifetime of the component and comp is the lifetime of the component. 
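The storage and economic relations (Equations 5 to 10) can be sketched in the same way. The example call uses the Trojan L16P figures and the Malaysian interest rate quoted in this paper; the battery float life passed in the example is a placeholder, since the paper does not quote one, and the code is a simplified stand-in for HOMER rather than its actual implementation.

```python
# Illustrative sketch of the storage and economic relations (Eqs. 5-10).

def battery_autonomy_h(n_batt, v_nom, q_nom_ah, soc_min_pct, daily_load_kwh):
    """Eq. 5: autonomy (h) of the battery bank against the average daily load."""
    usable_kwh = n_batt * v_nom * q_nom_ah * (1 - soc_min_pct / 100) / 1000.0
    return usable_kwh * 24.0 / daily_load_kwh

def battery_life_years(n_batt, q_lifetime_kwh, q_throughput_kwh_yr, float_life_years):
    """Eq. 6: R_batt = min(N_batt * Q_lifetime / Q_thrpt, R_batt,f)."""
    return min(n_batt * q_lifetime_kwh / q_throughput_kwh_yr, float_life_years)

def crf(i, n):
    """Eq. 8: capital recovery factor for real interest rate i and project life n."""
    return i * (1 + i) ** n / ((1 + i) ** n - 1)

def npc(total_annualised_cost, i, n):
    """Eq. 7: net present cost NPC = TAC / CRF."""
    return total_annualised_cost / crf(i, n)

def discount_factor(i, n):
    """Eq. 9: present-value factor for a cash flow occurring in year n."""
    return 1.0 / (1 + i) ** n

def salvage(replacement_cost, remaining_life, component_life):
    """Eq. 10: residual value of a component at the end of the project."""
    return replacement_cost * (remaining_life / component_life)

if __name__ == "__main__":
    # Four Trojan L16P batteries (6 V, 360 Ah, SOC_min 30%) against the LTE-BS daily load
    autonomy = battery_autonomy_h(4, 6, 360, 30, 0.965 * 24)
    life = battery_life_years(4, 1075, 713, 10)        # float life of 10 y assumed here
    print(f"Bank autonomy: {autonomy:.2f} h")           # ~6.27 h
    print(f"Expected battery life: {life:.1f} years")   # ~6.0 years
    print(f"CRF(3.25%, 20 y): {crf(0.0325, 20):.4f}")   # ~0.0688
```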
Simulation configuration The lifetime of the project is 20 years, which represents the lifetime of the BS equipment and the solar cells. Between all the components in a base station, solar cells have been found to be the most expensive in terms of capital cost. Hence, solar cells have a lifetime of 20 years, which is the same as the project lifetime, so neither BS nor the solar cells require replacement during the 20-year period. In addition, the next-generation network, i.e. fifth generation (5G), is predicted to be implemented in the next 20 years, based on historical evolution from the previous legacy network cycle, e.g. 3G and 2G. The Malaysian annual real interest rate was 3.25% in 2014 [28]. HOMER makes a decision in each time step to meet the power needs at the lowest cost, subject to the constraints of the dispatch strategy chosen in the simulation and a set point of 80%. The system must supply electricity to both the load (base station system) and the backup power system each hour. In the present study, the backup power needs 10% of the hourly load requirement to retain enough spare capacity to serve the load even under a sudden 10% decrease in the renewable energy output within an hour. Moreover, several sets of sizes will be considered in the simulation, taking into account the SPV, inverter and number of batteries needed to achieve cost-effective, reliable and efficient performance in the optimisation process. The efficiency of the inverter is assumed to be roughly constant over the working range (e.g. 90%, based on [27]), and the battery efficiency is taken to be 85%. The DG configurations are based on [2], where the DG cost is $660/kW. The replacement cost is assumed to be $660/kW, and the operating and maintenance costs are $0.05/h. The diesel price is $0.7/L, and the carbon emission penalty is internationally set to $2.25/t. The technical specifications, costs, economic parameters and system constraints that are used in the present study are given in more detail in Table 3 below. Table 3 Simulation setup of the SPV/DG hybrid system Different average daily solar radiation values of 5.1, 5.2, 5.3, 5.4 and 5.5 kWh/m2 are used to simulate the application of solar energy across a wide range of Malaysian states (for a detailed discussion, see the first paragraph of Section 4). The total power consumption by the LTE-BS is 965 W (details given in Table 2). Additional configuration details are given in Table 3. The energy output, the economic analysis of the proposed hybrid systems and the related sensitivity analysis are provided in the following paragraphs. Optimisation criteria Table 4 includes a summary of the technical and economic criteria for the optimal design of the hybrid SPV/DG system at different daily radiation values. Table 4 Summary of the technical and economic criteria for the optimal design of the hybrid SPV/DG system The optimal size of the solar energy system is obviously the same for all solar radiation rates proposed (5.1 to 5.5 kWh/m2/day) for the same capacity of the DG (1 kW). However, the energy contribution differs, with the contribution of energy from the solar power system increasing with increasing radiation rate. This increase will decrease the energy contribution of the DG, lowering its operating period and providing the benefits of reducing both the operating costs and pollution rate. 
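The settings above can be collected into a single structure, which may help when re-running the case study. Every value below is taken from Table 3 or the surrounding text; the dictionary layout itself is ours and is not a HOMER input format.

```python
# Main inputs of the HOMER runs, restated from Table 3 and the text.
SIMULATION_CONFIG = {
    "project_lifetime_years": 20,
    "real_interest_rate": 0.0325,          # Malaysia, 2014
    "operating_reserve_fraction": 0.10,    # 10% of the hourly load held as reserve
    "dispatch_set_point": 0.80,
    "bs_load_w": 965,                      # LTE-macro BS load (Table 2)
    "solar_radiation_cases_kwh_m2_day": [5.1, 5.2, 5.3, 5.4, 5.5],
    "spv": {"capital_usd_per_kw": 4000, "derating_factor": 0.90, "lifetime_years": 20},
    "diesel_generator": {
        "capital_usd_per_kw": 660,
        "replacement_usd_per_kw": 660,
        "o_and_m_usd_per_hour": 0.05,
        "diesel_price_usd_per_litre": 0.70,
        "specific_fuel_consumption_l_per_kwh": 0.388,
    },
    "battery": {"model": "Trojan L16P", "unit_cost_usd": 300,
                "round_trip_efficiency": 0.85, "nominal_voltage_v": 6,
                "nominal_capacity_ah": 360, "soc_min": 0.30},
    "inverter": {"capital_usd_per_kw": 900, "efficiency": 0.90},
    "carbon_penalty_usd_per_tonne": 2.25,
}
```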
The system costs consist of the following: (i) the initial capital cost is paid at the beginning of the project and decreases with decreasing size of the project components, with the largest proportion of these costs going towards the solar cells because of their high cost (approximately $4/W). Table 4 shows that the initial capital cost is fixed because the optimal system size is the same for all average daily solar radiation values studied. (ii) The operating cost is paid annually, and most of this cost goes towards operating and maintaining the DG. Table 4 indicates that the operating cost decreases with increasing solar radiation at the same optimal size of the system due to the increase in the energy contribution from the solar power system and the decrease in the energy contribution of the DG, which reduces the operating period of the DG. The NPC represents all costs that occur within the project lifetime, including initial setup costs, component replacements within the project lifetime and maintenance. More details will be provided in the next subsections. Energy yield analysis Figure 8 summarises the annual energy contribution of the solar electric system and the DG at different average daily solar radiation values. Higher solar radiation rates clearly correspond to higher annual energy contributions of the solar power system for the same SPV system size and less energy consumed by the generator. Thus, a higher solar radiation rate is desirable, as it decreases the maintenance and operational costs for the DG and the emission of polluting gases into the environment. The annual energy contribution of the solar electric system ranged from 43% to 47% for an average daily solar radiation value ranging from 5.1 to 5.5 kWh/m2, respectively. Annual energy contribution of various sources for different average solar radiation values. The following statistical analysis discusses the energy production based on an average daily solar radiation for Malaysia of 5.1 kWh/m2 as a case study, using the equations given in Section 6. However, this analysis can be extended to other cases, yielding a slight difference in daily peak solar hours per case depending on the average daily solar radiation. The average annual energy consumption of the LTE-macro BS is 8,453 kWh, computed as the LTE-macro BS load of 965 W (as given in Table 2) × 24 h × 365 days/year. The annual energy contribution of the solar system to the total energy production is 4,290 kWh, which is computed based on Equation 1: the SPV rated capacity of 2 kW × peak solar hours of 5.1 h × SPV derating factor of 0.9 × 365 days/year equals 3,351 kWh. However, a tracking system plays a role in increasing the total amount of energy produced by a solar system by approximately 20% to 25%. The present simulation adopted a dual-axis tracker, which increased the total energy output by approximately 21.90%, bringing the annual energy contribution of the solar system to 4,290 kWh. The DG covered the remaining portion of the energy with 5,573 kWh, which represents 57% of the total energy production. The total annual energy production of the hybrid system is 9,863 kWh (4,290 kWh from the solar system + 5,573 kWh from the DG), while the total annual energy needed by the LTE-macro BS is 8,453 kWh. The difference between electrical production and consumption is equal to the excess electricity of 888 kWh/year, plus the battery losses of 112 kWh/year, plus the inverter losses of 410 kWh/year.
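The arithmetic of this energy balance can be reproduced in a few lines. The tracked SPV output of 4,290 kWh and the DG production of 5,573 kWh are taken as reported by the HOMER run rather than re-derived, since the tracking gain is internal to the simulation; variable names are ours.

```python
# Annual energy balance for the 5.1 kWh/m2/day Malaysian case, rebuilt from the
# figures quoted in the text.
HOURS_PER_YEAR = 24 * 365

bs_load_kw = 0.965                                  # LTE-macro BS load (Table 2)
annual_load_kwh = bs_load_kw * HOURS_PER_YEAR       # ~8,453 kWh/year

fixed_array_kwh = 2.0 * 5.1 * 0.9 * 365             # Eq. 1, fixed array -> ~3,351 kWh
tracked_spv_kwh = 4290                              # reported output with dual-axis tracking
dg_kwh = 5573                                       # reported DG production

total_production_kwh = tracked_spv_kwh + dg_kwh     # ~9,863 kWh
solar_share = tracked_spv_kwh / total_production_kwh
surplus_kwh = total_production_kwh - annual_load_kwh

print(f"Annual BS consumption:   {annual_load_kwh:,.0f} kWh")
print(f"Fixed-array SPV (Eq. 1): {fixed_array_kwh:,.0f} kWh")
print(f"Solar share of output:   {solar_share:.0%}")          # ~43%
print(f"Production minus load:   {surplus_kwh:,.0f} kWh "
      "(= 888 excess + 112 battery losses + 410 inverter losses)")
```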
The maximum energy contribution of the solar power system occurred in March and August. The average monthly energy contribution ranged from 510 to 564 kWh, and the average monthly energy contribution of the DG ranged from 675 to 636 kWh at 5.1 and 5.5 kWh/m2/day, respectively. Meanwhile, the minimum energy contribution of the solar power system occurred in October and February. The average monthly energy contribution was 467 kWh at 5.1 kWh/m2/day and 564 kWh at 5.5 kWh/m2/day. These results are attributed to the differences in the average solar radiation rate, as the second part of the year (October to February) includes more cloud cover and thus lower solar potential than the first part of the year (March to October). The number of batteries in a string size is four batteries in parallel arrangement where the battery annual energy-in is 770 kWh, while the annual energy-out is 655 kWh, where the round-trip efficiency was 85%. Batteries can supply LTE-BS load autonomy for 6.27 h, which is computed based on Equation 5, (number of the batteries is 4 × nominal voltage of a single battery 6 V × nominal capacity of a single battery 360 Ah × 0.7 × 24) divided by (daily average LTE-BS load 23.2 kWh). However, one battery can supply LTE-BS load autonomy 1.57 h. The battery expected life is 6 years, based on Equation 6, number of the batteries 4 × battery lifetime throughput 1,075 kWh divided by battery annual throughput (713 kWh). Moreover, the inverter annual energy-in is 3,545 kWh, while the annual energy-out is 3,191 kWh, with 90% efficiency and 6,047 h/year operation. The following analysis discusses the cash flow based on an average daily solar radiation for Malaysia of 5.1 kWh/m2 as a case study, as shown in Figure 9. However, this analysis can be extended to other cases, yielding a slight difference in operating cost per case depending on the energy contribution of the solar power system, as explained in the above subsections. Cash flow summary of the SPV/DG hybrid power system within the project lifetime for a solar radiation value of 5.1 kWh/m 2 /day. The initial capital cost, paid once at the beginning of the project, is directly proportional to the size of the system. From Table 4, the initial capital cost is fixed at $11,210 because the optimal design found by the HOMER software for the hybrid power system is the same for all daily solar radiation values studied. The breakdown of this cost is as follows: (i) 71.4% for the SPV (size 2 kW × cost $4,000/1 kW = $8,000), (ii) 5.9% for the DG (size 1 kW× cost $660/1 kW = $660), (iii) 10.7% for the battery units (4 units × cost $300/unit = $1,200) and (iv) 12% for the inverter (size 1.5 kW × cost $900/1 kW = $1,350). In addition, the charger controller costs $2,000, and the control system cost depends on the system chosen. The SPV clearly represents the bulk of this cost. The annual cost for the maintenance and operation of the system amounted to $1,707, as shown in Figure 9. Here, the DG represents the bulk of this cost ($1,621/year, including the fuel cost). A breakdown of this cost is $305 for DG maintenance per year based on a DG maintenance cost of $0.05/h (as given in Table 3) × annual DG operating hours 6,091 h (as given in Table 4) and a fuel cost of $1,316, equal to a total diesel consumption of 1,880 L per year, as shown in Table 4 (computed based on a specific fuel consumption of 0.388 L/kWh × annual electrical production of the DG of 5,573 kWh/year at 5.1 kWh/m2/day, as given in Figure 8) multiplied by the diesel price of $0.7/L. 
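The capital-cost breakdown and the annual DG running cost quoted above can be cross-checked in a few lines. Prices are the ones given in the text; the annual diesel use of 1,880 L is taken as reported in Table 4 rather than re-derived. Variable names are ours.

```python
# Cross-check of the capital and DG running costs for the 5.1 kWh/m2/day case.
capital_usd = {
    "SPV (2 kW @ $4,000/kW)":      2.0 * 4000,
    "DG (1 kW @ $660/kW)":         1.0 * 660,
    "Batteries (4 @ $300)":        4 * 300,
    "Inverter (1.5 kW @ $900/kW)": 1.5 * 900,
}
total_capital = sum(capital_usd.values())          # $11,210

dg_hours = 6091                                    # annual DG operating hours (Table 4)
dg_maintenance = dg_hours * 0.05                   # ~$305/year at $0.05/h
fuel_litres = 1880                                 # annual diesel use reported in Table 4
fuel_cost = fuel_litres * 0.70                     # ~$1,316/year at $0.7/L

for item, cost in capital_usd.items():
    print(f"{item:<30} ${cost:>7,.0f}  ({cost / total_capital:.1%})")
print(f"{'Total initial capital':<30} ${total_capital:>7,.0f}")
print(f"DG maintenance: ${dg_maintenance:,.0f}/yr, "
      f"fuel: {fuel_litres:,} L -> ${fuel_cost:,.0f}/yr")
```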
Thus, lower DG energy consumption and increased reliance on the solar power system reduce the operational expenses, and both are achieved under a higher solar radiation rate. Figure 10 highlights the annual diesel cost for each solar radiation rate. Fuel cost vs. global solar radiation rate. Regarding the other components, the batteries, SPV and inverter cost $40/year, $20/year and $15/year, respectively. Generally, the bulk of the replacement cost goes to the components with short operational lifetimes, which are the DG (4 years) and the batteries (expected 6 years). The DG will be changed in years 5, 9, 13 and 17 at a cost of $660, while the batteries will be changed at a cost of $1,200 (4 units × $300/unit) in years 7 and 13. Moreover, the cost of replacing the inverter in year 15 is $1,350 (1.5 kW × $900/1 kW). The SPV array has a lifetime of 20 years, which is the same as the project lifetime, so it requires no replacement. Thus, the total replacement cost during the project lifetime is $7,590. The economic analysis described above has been conducted on the basis of the nominal system. However, Figure 11, below, shows the discount factor for each year of the project lifetime based on Equation 9. Discount factor for each year of the project lifetime. The NPC of a system is the present value of all the costs that it incurs over its lifetime, minus the present value of all the revenue that it earns over its lifetime. Costs include capital costs, replacement costs, O&M costs and fuel costs. Revenues include the salvage value. The total NPC is calculated by summing the discounted cash flows in each year of the project lifetime: capital costs $11,210 + replacement costs $5,198 + O&M costs $5,680 + fuel costs $19,138 − salvage $1,038 = $40,188 for a solar radiation rate of 5.1 kWh/m2. Figure 12 describes a comparison of the solar radiation rate and total NPC. A comparison of the solar radiation rate and NPC. The NPC clearly decreases with increasing solar radiation at the same optimal size of the system due to the increase in the energy contribution from the solar power system and the decrease in the energy contribution of the DG, which reduces the operating period of the DG and hence decreases the total operating and maintenance cost as well as the fuel cost. Due to improved manufacturing techniques and higher volumes, the carbon footprint of solar panels is much lower than it once was. In general, as the solar radiation rate increases, the pollution rate decreases because the energy consumption of the DG, and thus the operating period of the DG, decreases. Figure 13 compares the solar radiation rate and annual pollutant emissions. Pollutant emissions vs. global solar radiation rate. Comparison of the feasibility of using the solar energy approach between Germany and Malaysia Germany is the world's top solar photovoltaic consumer. The annual number of daylight hours stands at approximately 1,300 to 1,900 h, and the annual average solar radiation is approximately 1,100 kWh/m2, where the level of average daily solar radiation ranges from 2.6 to 3.7 kWh/m2. Hence, the average daily solar radiation is estimated at 3 kWh/m2 [29]. In addition, Germany has a four-season climate, which is useful for comparison purposes to validate the feasibility of the proposed off-grid solar energy approach in two different climates, i.e. tropical (Malaysia) and four-season (Germany). The comparison is based on three key aspects: technical criteria, energy yield analysis and economic analysis.
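The bookkeeping behind the $40,188 figure can be restated compactly. The five component totals below are the discounted values quoted in the text (they are not re-derived from the replacement schedule), and the loop simply illustrates Equation 9 for the replacement years mentioned above; names and layout are ours.

```python
# NPC bookkeeping for the 5.1 kWh/m2/day case at the Malaysian real interest rate.
i = 0.0325                                    # annual real interest rate (Malaysia, 2014)

def discount_factor(year, rate=i):
    """Eq. 9: present-value factor for a cash flow occurring in `year`."""
    return 1.0 / (1.0 + rate) ** year

npc_components_usd = {
    "Initial capital":           11_210,
    "Replacements (discounted)":  5_198,
    "O&M (discounted)":           5_680,
    "Fuel (discounted)":         19_138,
    "Salvage (discounted)":      -1_038,
}
total_npc = sum(npc_components_usd.values())  # $40,188

print(f"Total NPC: ${total_npc:,}")
for year in (5, 7, 9, 13, 15, 17):            # DG, battery and inverter replacement years
    print(f"  discount factor for year {year}: {discount_factor(year):.3f}")
```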
The comparison takes into account the main differences in the average daily solar radiation, the annual real interest rate of 1% [30], the SPV cost of $2/watt [31] and the diesel price of $1.65/L in Germany [32] versus $0.70/L in Malaysia. Table 5 includes a summary of the technical and economic criteria for the optimal design of the hybrid SPV/DG system for both Malaysia and Germany. A lower solar radiation rate leads to a larger solar system size. For Germany, the average daily solar radiation is 3 kWh/m2, which is equivalent to 58.8% of the average daily solar radiation in Malaysia. The system specification designed using the HOMER software consists of a 3.5 kW SPV array, a 1 kW DG, a 2.16 kAh/6 V storage battery bank (360 Ah × 6 units) and a 1.5 kW inverter for the same LTE-BS load. Although the solar system size for Germany is larger than that for Malaysia (as shown in Table 5), the initial capital cost in Germany is lower than in Malaysia because the solar system components are cheaper in Germany. However, the NPC for Germany is approximately double that for Malaysia. The main reason behind the doubling is the price of fuel, as will be explained in the following paragraphs. Figure 14 summarises the annual energy contribution of the solar electric system and the DG. For Malaysia's solar radiation exposure, the solar system contributed 43% (4,290 kWh) and the DG 57% (5,573 kWh) of a total annual energy production of 9,863 kWh. For Germany's solar radiation exposure, the solar system contributed 42% (4,094 kWh) and the DG 58% (5,667 kWh) of a total annual energy production of 9,761 kWh. The annual energy contribution of different energy sources for Malaysia and Germany. Figure 15 describes the comparison of all of the costs that occur within the project lifetime for both Malaysia and Germany. The initial capital cost is directly proportional to the size of the system. However, the initial capital cost of the project in Germany is lower than in Malaysia because the price of the solar system components in Germany is lower. The total maintenance and operation costs for Germany are higher than for Malaysia because the German solar system is larger than the Malaysian one, which leads to higher operating and maintenance costs. Replacement costs depend significantly on the discount factor, which is given in Equation 9. If the real interest rate increases, the discount factor will decrease, and the replacement cost will also decrease. The real interest rate for Malaysia is 3.25% and for Germany is 1% for quarter 3 of 2014, so the discount rate for Malaysia is higher than for Germany, which is reflected in the total replacement costs. The fuel cost represents the bulk of the NPC, amounting to 48% of the total project cost for Malaysia and 70% for Germany. Hence, the NPC for Germany is double the value for Malaysia. NPC estimation in HOMER also considers the salvage cost (based on Equation 10), which is the residual value of the power system components at the end of the project lifetime. NPC comparison between Malaysia and Germany. This study examined the feasibility of the integration of a solar power system with a DG to supply power to remote BSs in off-grid sites of Malaysia to minimise both the OPEX and carbon emissions.
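A compact side-by-side restatement of the two cases, using only the quantities quoted above (Table 5 and Figs. 14 and 15), is sketched below; the dictionary layout and the derived percentages are ours, and the snippet is purely illustrative.

```python
# Malaysia vs Germany inputs and energy results, restated from the text.
cases = {
    "Malaysia": {"radiation_kwh_m2_day": 5.1, "real_interest": 0.0325,
                 "spv_cost_usd_per_w": 4.0, "diesel_usd_per_l": 0.70,
                 "spv_kw": 2.0, "batteries": 4,
                 "spv_kwh": 4290, "dg_kwh": 5573, "fuel_share_of_npc": 0.48},
    "Germany":  {"radiation_kwh_m2_day": 3.0, "real_interest": 0.01,
                 "spv_cost_usd_per_w": 2.0, "diesel_usd_per_l": 1.65,
                 "spv_kw": 3.5, "batteries": 6,
                 "spv_kwh": 4094, "dg_kwh": 5667, "fuel_share_of_npc": 0.70},
}

for name, c in cases.items():
    total_kwh = c["spv_kwh"] + c["dg_kwh"]
    print(f"{name}: {c['spv_kw']} kW SPV + 1 kW DG + {c['batteries']} batteries, "
          f"solar share {c['spv_kwh'] / total_kwh:.0%}, "
          f"fuel = {c['fuel_share_of_npc']:.0%} of NPC "
          f"(diesel ${c['diesel_usd_per_l']:.2f}/L, SPV ${c['spv_cost_usd_per_w']:.2f}/W)")
```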
Three key aspects have been investigated: (i) energy yield analysis, (ii) economic analysis and (iii) greenhouse gas emissions. When the solar radiation rates increase, the energy produced from the solar power system will increase, which has the positive effect of reducing the emission of polluting gases and the operating cost of the system. In summary, the bulk of the initial cost of the system is the cost of the SPV due to its inherent high cost. Meanwhile, the bulk of the annual operating and maintenance costs go to the DG, and the bulk of the replacement cost goes to the DG and batteries. However, the average annual OPEX savings of the hybrid system was 43% to 47% depending on the solar radiation rate. Comparison between Malaysia and Germany shows that Malaysia's climatic conditions are desirable for wide utilisation of the proposed off-grid hybrid system due to the high amount of solar radiation received throughout the year, in addition to lower project cost in the long run. International Telecommunication Union (ITU) statistics database; available: http://www.itu.int/en/ITU-D/Statistics/Pages/stat/default.aspx, accessed 26.Aug.2014. K Kanzumba, JV Herman, Hybrid renewable power systems for mobile telephony base stations in developing countries. Renew Energ. 51(2013), 419–425 (2013) C Tao, Y Yang, H Zhang, H Kim, K Horneman, Network energy saving technologies for green wireless access networks. IEEE Commun. Mag. 18(5), 30–38 (2011) MH Alsharif, R Nordin, M Ismail, Classification, recent advances and research challenges in energy efficient cellular networks. Wireless Pers. Commun. 77(2), 1249–1269 (2014) H Ziaul, H Boostanimehr, VK Bhargava, Green cellular networks: a survey, some research issues and challenges. IEEE Commun. Surv. Tut. 13(4), 524–540 (2011) P Subodh, DS Madhu, A Muna, SN Jagan, Technical and economic assessment of renewable energy sources for telecom application: a case study of Nepal Telecom, in Proc of 5 th International Conference on Power and Energy Systems (Nepal, Kathmandu, 2013). pp. 28–30 GeSI, Global e-sustainability initiative: SMART 2020: Enabling the low carbon economy in the information age. (2008); available: http://www.smart2020.org/_assets/files/02_smart2020Report.pdf, accessed 12. Nov. 2014 S Rath, SM Ali, MN Iqbal, Strategic approach of hybrid solar-wind power for remote telecommunication sites in India. Int. J. Sci. Eng. Res. 3(6), 1–6 (2012) P Nema, R Nema, S Rangnekar, Minimization of green house gases emission by using hybrid energy system for telephony base station site application. Renew Sustain Energ. Rev. 14(6), 1635–1639 (2010) AV Anayochukwu, EA Nnene, Simulation and optimization of hybrid diesel power generation system for GSM base station site in Nigeria. Elec. J. Energ. Environ. 1(1), 37–56 (2013) WA Imtiaz, K Hafeez, Stand alone PV system for remote cell site in Swat Valley, in Proc of 1 st Abasyn International Conference on Technology and Business Management, Pakistan, 2013 S Moury, MN Khandoker, SM Haider, Feasibility study of solar PV arrays in grid connected cellular BTS sites, in Proc of 2012 International Conference on Advances in Power Conversion and Energy Technologies (APCET), Mylavaram, Andhra Pradesh, 2012, pp. 
1–5 M Martínez-Díaz, R Villafáfila-Robles, D Montesinos-Miracle, A Sudrià-Andreu, Study of optimization design criteria for stand-alone hybrid renewable power systems, in Proc of International Conference on Renewable Energies and Power Quality (ICREPQ'13), 2013 P Bajpai, N Prakshan, N Kishore, Renewable hybrid stand-alone telecom power system modeling and analysis, in Proc of IEEE TENCON Conference, 2009 VA Ani, AN Nzeako, Potentials of optimized hybrid system in powering off-grid macro base transmitter station site. Int. J. Renew Energ. Res. 3(4), 861–871 (2013) S Mekhilef, A Safari, WES Mustaffa, R Saidur, R Omar, MAA Younis, Solar energy in Malaysia: current state and prospects. Renew Sustain Energ. Rev. 16(1), 386–396 (2012) K Sopian, MY Othman, A Wirsat, The wind energy potential of Malaysia. Renew Energ. 6(8), 1005–1016 (1995) H Borhanazad, S Mekhilef, R Saidur, G Boroumandjazi, Potential application of renewable energy for rural electrification in Malaysia. Renew Energ. 59, 210–219 (2013) H Borhanazad, S Mekhilef, R Saidur, VG Ganapathy, Long-term wind speed forecasting and general pattern recognition using neural networks. IEEE T. Sustain. Energ. 5(2), 546–553 (2014) WA Azhari, K Sopian, A Zaharim, M Al Ghoul, A new approach for predicting solar radiation in tropical environment using satellite images – case study of Malaysia. WSEAS Trans. Environ. Dev. 4, 4 (2008) T Khatib, A Mohamed, K Sopian, M Mahmoud, Solar energy prediction for Malaysia using artificial neural networks. Int. J. Photoenergy, vol. 2012, Article ID 419504, doi:10.1155/2012/419504, 1–16 (2012) Green Power for Mobile, GSMA, Community Power Using Mobile to Extend the Grid; available: http://www.gsma.com/mobilefordevelopment/wp-content/uploads/2012/05/Community-Power-Using-Mobile-to-Extend-the-Grid-January-2010.pdf, accessed 18 Nov. 2014. G Auer, O Blume, V Giannini, I Godor, MA Imran, Y Jading, E Katranaras, M Olsson, D Sabella, P Skillermark, W Wajda, Energy efficiency analysis of the reference systems, areas of improvements and target breakdown. EARTH project report, Deliverable D2.3, 2010 G Schmitt, The green base station, in Proc. of 4th International Conference on Telecommunication - Energy Special Conference (TELESCON), 1–6, 2009 T Lambert, P Gilman, P Lilienthal, Micropower system modeling with HOMER (2006); available: http://homerenergy.com/documents/MicropowerSystemModelingWithHOMER.pdf, accessed 18 Nov. 2014. Trojan Battery Company; available: http://www.trojanbattery.com/, accessed 18 Nov. 2014. S Rehman, LM Al-Hadhrami, Study of a solar PV–diesel–battery hybrid power system for a remotely located population near Rafha, Saudi Arabia. Energ. 35(12), 4986–4995 (2010) Central Bank of Malaysia; available: http://www.bnm.gov.my/index.php?ch=statistic&pg=stats_convinterbkrates, accessed 28 Aug. 2014. International Energy Agency (IEA), Solar Energy Perspectives; available: http://www.iea.org/publications/freepublications/publication/solar_energy_perspectives2011.pdf, accessed 18 Nov. 2014. Organization for Economic Co-operation and Development (OECD); available: http://stats.oecd.org/index.aspx?queryid=86, accessed 18 Nov. 2014. International Renewable Energy Agency (IRENA), Renewable Energy Technologies: Cost Analysis Series; available: http://www.irena.org/DocumentDownloads/Publications/RE_Technologies_Cost_Analysis-SOLAR_PV.pdf, accessed 18 Nov. 2014. Fuel Prices Europe information; available: http://www.fuel-prices-europe.info/, accessed 17 Nov. 2014.
The authors would like to thank the Universiti Kebangsaan Malaysia for the financial support of this work, under the Grant Ref: ETP-2013-072. Department of Electrical, Electronics and Systems Engineering, Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia, 43600 Bangi, Selangor, Malaysia: Mohammed H Alsharif, Rosdiadee Nordin & Mahamod Ismail. Correspondence to Mohammed H Alsharif. Keywords: mobile base station; off-grid hybrid energy systems; energy harvesting; wireless communications.
Stochastic Systems, Stoch. Syst. Volume 2, Number 1 (2012), 208–231. Stability of a Markov-modulated Markov chain, with application to a wireless network governed by two protocols. Sergey Foss, Seva Shneer, and Andrey Tyurlikov. We consider a discrete-time Markov chain $(X^{t},Y^{t})$, $t=0,1,2,\ldots$, where the $X$-component forms a Markov chain itself. Assume that $(X^{t})$ is Harris-ergodic and consider an auxiliary Markov chain $\{\widehat{Y}^{t}\}$ whose transition probabilities are the averages of the transition probabilities of the $Y$-component of the $(X,Y)$-chain, where the averaging is weighted by the stationary distribution of the $X$-component. We first provide natural conditions in terms of test functions ensuring that the $\widehat{Y}$-chain is positive recurrent and then prove that these conditions are also sufficient for positive recurrence of the original chain $(X^{t},Y^{t})$. We then prove a "multi-dimensional" extension of this result. In the second part of the paper, we apply our results to two versions of a multi-access wireless model governed by two randomised protocols. doi:10.1214/11-SSY030. https://projecteuclid.org/euclid.ssy/1393252044. Mathematics Subject Classification: Primary: 93E15 (stochastic stability); Secondary: 60K25 (queueing theory), 68M20 (performance evaluation; queueing; scheduling). Keywords: stochastic stability, Markov-modulated Markov chain. Citation: Foss, Sergey; Shneer, Seva; Tyurlikov, Andrey. Stability of a Markov-modulated Markov chain, with application to a wireless network governed by two protocols. Stoch. Syst. 2 (2012), no. 1, 208–231. doi:10.1214/11-SSY030.
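As a concrete illustration of the averaging construction described in the abstract, the following sketch builds the transition matrix of the auxiliary $\widehat{Y}$-chain for a toy finite-state example; the modulating states, their kernels and the resulting numbers are invented for illustration and do not come from the paper.

```python
import numpy as np

# Illustrative finite-state example of the averaged ("hat") chain described above.
# P_X: transition matrix of the modulating X-chain.
# P_Y_given_x[x]: transition matrix of the Y-component when the X-component is in state x.
P_X = np.array([[0.9, 0.1],
                [0.2, 0.8]])
P_Y_given_x = [np.array([[0.7, 0.3],
                         [0.4, 0.6]]),
               np.array([[0.2, 0.8],
                         [0.5, 0.5]])]

# Stationary distribution pi of the X-chain: left eigenvector of P_X for eigenvalue 1.
eigvals, eigvecs = np.linalg.eig(P_X.T)
pi = np.real(eigvecs[:, np.argmax(np.isclose(eigvals, 1.0))])
pi = pi / pi.sum()

# Transition matrix of the auxiliary Y-hat chain: the Y-kernels averaged with
# weights given by the stationary distribution of the X-component.
P_Y_hat = sum(p * k for p, k in zip(pi, P_Y_given_x))
print("pi =", pi)
print("P_Y_hat =\n", P_Y_hat)
```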
Article ID 0001 February 2019 Brief communication A new find of calc-alkaline lamprophyres in Thanewasna area, Western Bastar Craton, India Meshram R R Dora M L Naik R Shareef M Gopalakrishna G Meshram T Baswani S R Randive K R Lamprophyre dykes within the granitoid and charnockite are reported for the first time from the Western Bastar Craton, Chandrapur district, Maharashtra. It shows porphyritic–panidiomorphic texture under a microscope, characterised by the predominance of biotite phenocrysts with less abundance of amphibole and clinopyroxene microphenocryst. The groundmass is composed more of K-feldspars over plagioclase, amphiboles, clinopyroxene, biotite, chlorite, apatite, sphene and magnetite. The mineral chemistry of biotite and magnesio-hornblende is indicative of minette variety of calc-alkaline lamprophyre (CAL), which is further supported by preliminary major oxides and trace element geochemistry. This unique association of CAL with granitoid provides an opportunity to study the spatio-temporal evolution of the lamprophyric magma in relation to the geodynamic perspective of the Bastar Craton. Article ID 0002 February 2019 Research Article Evaluation of TanDEMx and SRTM DEM on watershed simulated runoff estimation Nagaveni Chokkavarapu Pavan Kumar K Venkata Ravibabu Mandla In hydrological models, digital elevation models (DEMs) are being used to extract stream network and delineation of the watershed. DEMs represent elevation surfaces of earth landscape. Spatial resolution refers to the dimension of the cell size representing the area covered on the ground. Spatial resolution is the main parameter of a DEM. The grid cell size of raster DEM has significant effects on derived terrain variables such as slope, aspect, curvature, the wetness index, etc. Selection of appropriate spatial resolution DEM depends on other input data being used in the model, type of application and analysis that needs to be performed, the size of the database and response time. Each DEM contains inherent errors due to the method of acquisition and processing. The accuracy of each DEM varies with spatial resolution. The present paper deals with Shuttle Radar Topography Mission (SRTM), TerraSAR-X add-on for Digital Elevation Measurements (TanDEM DEMs) and compares their watershed delineation, slope, stream network and height with ground control points. It was found that the coarse resolution DEM-derived attributes and terrain morphological characteristics were strongly influenced by DEM accuracy. The objective of the present study is to investigate the impact of DEM resolution on topographic parameters and runoff estimation using TanDEM-12, TanDEM-30 and SRTM-90 m with the Soil and Water Assessment Tool. The analysis of the results using different DEM resolutions gave a varied number of sub-basins, Hydrological Response Units (HRUs) and watershed areas. The results were optimum at a specific threshold value as extraction of drainage network has a significant influence on simulated results. The accuracy of DEM is important, as the source of construction of DEM is the main factor causing uncertainty in the output. The results showed variable amounts of runoff at the watershed level, which may be attributed to varied stream lengths, minimum and maximum elevations and sub-basin areas. 
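As a small illustration of the resolution effect on derived terrain variables discussed in the preceding abstract, the sketch below compares the mean slope computed from a synthetic elevation grid at its native cell size and after block-averaging to a coarser cell size; the grid, cell sizes and relief are invented for illustration and are unrelated to the SRTM or TanDEM data used in the study.

```python
import numpy as np

# Synthetic 90 x 90 elevation grid (metres) with small-scale relief.
rng = np.random.default_rng(0)
x, y = np.meshgrid(np.arange(90), np.arange(90))
dem = 500 + 0.5 * x + 20 * np.sin(x / 3.0) * np.cos(y / 4.0) + rng.normal(0, 1, x.shape)

def mean_slope_deg(z, cell_size):
    """Mean slope (degrees) from finite-difference gradients of an elevation grid."""
    dzdy, dzdx = np.gradient(z, cell_size)
    return np.degrees(np.arctan(np.hypot(dzdx, dzdy))).mean()

# Coarsen by block-averaging 3 x 3 windows (e.g., 30 m cells -> 90 m cells).
coarse = dem.reshape(30, 3, 30, 3).mean(axis=(1, 3))

print("mean slope, fine grid  :", round(mean_slope_deg(dem, 30.0), 2), "deg")
print("mean slope, coarse grid:", round(mean_slope_deg(coarse, 90.0), 2), "deg")
```

The coarse grid typically returns a flatter mean slope, which is the kind of resolution dependence of terrain derivatives the abstract refers to.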
Lithofacies correlation in Early Permian fluvial Gondwana stratigraphy of southeastern India using cross-association statistics Zahid A Khan Ram Chandra Tewari The cross-association statistical technique is used to correlate major and minor lithofacies and the corresponding facies areas in the widely separated two borehole log profiles of Early Permian succession from the Kaghaznagar and Kothagudem sub-basins of Pranhita–Godavari Graben (PGG) of southeastern India. The one-to-one correspondence of the cross-association of major lithofacies and facies areas is strikingly similar and matches significantly more than expected. It indicates the continuity of single homogenous succession deposited under an identical depositional environment in the widely separated two sub-basins of PGG. However, the dissimilarity between the micro-lithofacies, on the other hand, suggests a different sub-environment through space and time. The significant correlation of major lithofacies and facies areas of the two sub-basins suggests meandering stream depositional facies model of the Early Permian Barakar succession in PGG. It may also provide information regarding the exploration of coal. The dissimilarities of cross-association at the micro-lithofacies level may reflect the differential subsidence through space and time. Assessment of Met Office Unified Model (UM) quantitative precipitation forecasts during the Indian summer monsoon: Contiguous Rain Area (CRA) approach Kuldeep Sharma Raghavendra Ashrit Elizabeth Ebert Ashis Mitra Bhatla R Gopal Iyengar Rajagopal E N The operational medium range rainfall forecasts of the Met Office Unified Model (UM) are evaluated over India using the Contiguous Rainfall Area (CRA) verification technique. In the CRA method, forecast and observed weather systems (defined by a user-specified rain threshold) are objectively matched to estimate location, volume, and pattern errors. In this study, UM rainfall forecasts from nine (2007–2015) Indian monsoon seasons are evaluated against 0.5$^{\circ }\times$ 0.5$^{\circ }$ IMD–NCMRWF gridded observed rainfall over India (6.5$^{\circ }{-}$38.5$^{\circ }$N, 66.5$^{\circ }{-}$100.5$^{\circ }$E). The model forecasts show a wet bias due to excessive number of rainy days particularly of low amounts (<1 mm d$^{-1}$). Verification scores consistently suggest good skill the forecasts at threshold of 10 mm d$^{-1}$, while moderate (poor) skill at thresholds of <20 mm d$^{-1}$ (<40 mm d$^{-1}$). Spatial verification of rainfall forecasts is carried out for 10, 20, 40 and 80 mm d$^{-1}$ CRA thresholds for four sub-regions namely (i) northwest (NW), (ii) southwest (SW), (iii) eastern (E), and (iv) northeast (NE) sub-region. Over the SW sub-region, the forecasts tend to underestimate rain intensity. In the SW region, the forecast events tended to be displaced to the west and southwest of the observed position on an average by about 1$^{\circ }$ distance. Over eastern India (E) forecasts of light (heavy) rainfall events, like 10 mm d$^{-1}$ (20 and 40 mm d$^{-1}$) tend to be displaced to the south on an average by about 1$^{\circ }$ (southeast by 1$-2^{\circ }$). In all four regions, the relative contribution to total error due to displacement increases with increasing CRA threshold. These findings can be useful for forecasters and for model developers with regard to the model systematic errors associated with the monsoon rainfall over different parts of India. 
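The following sketch illustrates, in simplified form, the kind of CRA-style error decomposition described in the preceding abstract: a forecast field is shifted over a small window of offsets to best match the observations, and the total mean squared error is then split into displacement, volume and pattern components, following the usual Ebert–McBride partition. The rain fields, the wrap-around shifting and the omission of any rain threshold are deliberate simplifications, and the numbers are made up for illustration.

```python
import numpy as np

def cra_decomposition(obs, fcst, max_shift=5):
    """Simplified CRA-style error decomposition (displacement/volume/pattern).

    The forecast is shifted over a small window of integer offsets to best match
    the observations (np.roll wraps at the edges -- a simplification).
    """
    best_mse, best_shift = np.inf, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            mse = np.mean((np.roll(fcst, (dy, dx), axis=(0, 1)) - obs) ** 2)
            if mse < best_mse:
                best_mse, best_shift = mse, (dy, dx)
    shifted = np.roll(fcst, best_shift, axis=(0, 1))
    mse_total = np.mean((fcst - obs) ** 2)
    mse_displacement = mse_total - best_mse          # error removed by the best shift
    mse_volume = (shifted.mean() - obs.mean()) ** 2  # bias of the mean amounts
    mse_pattern = best_mse - mse_volume              # remaining fine-scale error
    return best_shift, mse_total, mse_displacement, mse_volume, mse_pattern

# Made-up rain fields: the "forecast" blob is displaced east of the "observed" blob.
y, x = np.mgrid[0:50, 0:50]
obs = 40 * np.exp(-((x - 20) ** 2 + (y - 25) ** 2) / 50.0)
fcst = 35 * np.exp(-((x - 26) ** 2 + (y - 25) ** 2) / 50.0)
print(cra_decomposition(obs, fcst))
```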
The worthiness of using information on land-use–land-cover in watershed models for Western Ghats: A case study Yadupathi Putty Mysuru R Kavya B M The variable source area (VSA) theory of runoff generation mechanisms has been proved to hold good in many wet mountainous areas, decades ago. According to this theory, infiltration-excess overland flow is limited to very small areas in mountainous and forested catchments. But, the perception that the land surface characteristics, including land-use–land-cover (LULC), form the major factors influencing the response of the catchment to rainfall has dominated the thought in hydrology to such an extent that models based on the overland flow theory continue to be used even in such areas. The present study was taken up in order to understand the worthiness of using parameters, including the curve number (CN), that are based on the physiographic characteristics of the catchment in a watershed model designed to estimate runoff in the wet mountainous areas of the Western Ghats in southern India, where the VSA theory has been proved to hold good. The study has been accomplished by applying the NITK model developed for estimating runoff using daily rainfall data. This model is believed to estimate reliably the streamflow in the region using parameter values that can be computed from catchment characteristics. In the present study, it is applied on three gauged streams in the region of Western Ghats in Karnataka. Initially, the performance of the model has been studied with the parameters fixed using the catchment characteristics. Later, the model has been used as a tool to test hypotheses concerning the catchment response, by varying the parameter values, adopting a trial and error procedure. Initial results showed that the model performance is poor as the coefficients of efficiency vary between $–$66.9 and 82%. The sensitivity analysis carried out subsequently showed that the model parameters are required to be altered greatly for good performance and that the model simulations are not sensitive to the parameter CN. Further, the performance of this model was compared with that of a VSA model, known to suit the region well. This showed that even after all the changes in the model parameters, the model results are not highly reliable. Hence, in order to understand the reasons for the poor performance of the model, a technique was developed to compute the CN values that would be actually necessary to simulate daily direct runoff (DRO) reliably in this method, the daily values of CN are computed by applying backwards the expression for runoff on the DRO estimated by the VSA model. The variations in the values of CN computed using this method are then studied. It is found that the variations in daily CN are high and highly random too, whereas the NITK model uses only three fixed values of CN. It is thus concluded that factors other than those on which the CN is popularly believed to depend control the runoff generation in the region and that influence of LULC on runoff is not discernible at all from the kind of data that is commonly available. Probabilistic seismic hazard assessment of Peshawar District, Pakistan Ahmad Hammad Khaliq Muhammad Waseem Sarfraz Khan Waqas Ahmed Muhammad Asif Khan The seismic provisions for the Building Code of Pakistan were revised after the 2005 Kashmir earthquake and these have resulted in the introduction of a macrozonation ground motion hazard map in the seismic provisions. 
The macrozonation map proposes a peak ground acceleration (PGA) for the return period of 475 yr for Pakistan for flat rock sites. After the macrozonation, the next step is to develop the surface ground motion assessment studies for the cities, districts and tehsils of Pakistan. In this study, the probabilistic seismic hazard analysis (PSHA) approach is used for the Peshawar District. The PSHA, consistent with the classical Cornell approach, is carried out to obtain the seismic hazard curves and uniform hazard spectra of PGA values for the return periods of 150, 475, 975 and 2475 yr at a grid spacing of 0.1$^{\circ }\times$ 0.1$^{\circ }$. The PGA for Peshawar at 150, 475, 975 and 2475 yr return period is estimated as 0.23, 0.34, 0.39 and 0.45g, respectively, for rock flat outcrop site conditions. The surface ground motion maps proposed in this study incorporate the local soil effects using amplification factors based on shear wave velocity obtained as a proxy to the topographic slope. The resultant ground surface hazard assessment proposes the PGA value of 0.63g for the return period of 475 yr and 0.89g for the return period of 2475 yr. The maps developed in the current study are important inputs for the structural designing, risk assessment and land use planning of the Peshawar District. Mapping sediment thickness of the Abbottabad basin, Pakistan Zahid Hussain Sarfraz Khan Muhammad Asif Khan Muhammad Waseem Waqas Ahmed The Abbottabad basin is mainly composed of different loose and indurated sediments such as fine to medium grain silt and clay and large to medium sized boulders and cobbles, occupying a low land between the hills. These sediments are primarily stream deposits and variably compacted in the form of rock, suggested name Havelian group after their maximum thickness into Havelian area. Numerous streams converge at the Abbottabad intermontane basin from the north–northeast and join to form a single channel that passes through a narrow gorge on the western side of the Sirban hill. Geomorphically, the Abbottabad city is underlain by a thick sequence of loose Quaternary–Recent alluvial sediments, making it vulnerable to seismic hazards. This research determines the sediment thickness for the Abbottabad basin using a geophysical approach. In this regard, thirteen lithologic profiles were developed in the Abbottabad basin at different locations. These profiles were ultimately combined to develop a Fence diagram showing a generalized stratigraphic pattern of the Quaternary–Recent unconsolidated sediments in the basin. Standard Penetration Test (SPT) and H/V analysis were used to characterize the site and shear wave velocity at a different location of Abbottabad basin and surrounding area. Based on H/V data (using Tromino Engy Plus instrument) Abbottabad basin and immediate surroundings have an average fundamental frequency from 0.5 to 9 Hz, which represents the deposition of alluvial sediments (i.e., stiff and dense soil). Source characteristics of the upper mantle 21 May, 2014 Bay of Bengal earthquake of $M_{w}$5.9 Prantik Mandal Koushik Biswas Akhileshwar Prasad We measure source parameters for the 21 May, 2014 Bay of Bengal earthquake through inversion modeling of S-wave displacement spectra from radial–transverse–vertical (RTZ) components recorded at ten broadband stations in the eastern Indian shield. The average source parameters are estimated using estimates from seven near stations (within epicentral distances $\leq$ 500 km). 
The average seismic moment and source radius are determined to be 1.0$\times$ 10$^{18}$ N-m and 829 m, respectively, while average stress drop is found to be 76.5 MPa. The mean corner frequency and moment magnitude are calculated to be 1.6 $\pm$ 0.1 and 5.9 $\pm$ 0.2 Hz, respectively. We also estimated mean radiated energy and apparent stress, which are found to be 6.1$\times$10$^{13}$ joules and 1.8 MPa, respectively. We observe that mean $E_{s}$/$M_{o}$ estimate of 5.5 $\times$ 10$^{-5}$ is found to be larger than the global average for oceanic strike-slip events. This observation along with large stress drop and apparent stress estimates explains the observed remarkably felt intensity data of the 2014 event. The full waveform moment tensor inversion of the band-passed (0.03–0.12 Hz) broadband displacement data suggests the best fit for the multiple point sources on a plane located at 65 km depth, with a moment magnitude 6.4, and a focal mechanism with strike 318$^{\circ}$, dip 87$^{\circ}$, and rake 34$^{\circ}$. Analysis of deformation characteristics and stability mechanisms of typical landslide mass based on the field monitoring in the Three Gorges Reservoir, China Yonggang Zhang Shuyun Zhu Weiqiang Zhang Hui Liu Based on a large number of data including GPS monitoring of surface deformation and inclinometer monitoring of internal deformation over 7 years, we find that the displacement of a typical landslide mass has the stepped evolution characteristics as: the variation of the reservoir water level under the different years and months in the Three Gorges Reservoir and the deformation of landslide mass surges in the flood season. On the contrary, the deformation of landslide mass slows down in the non-flood season. Especially, in 2007, 2009 and 2011, the fluctuation of the surface monitoring displacement is more intense than that in the other years. In addition, the whole landslide mass has a characteristic of the trial-type sliding. The surface displacement is greater than the internal displacement. Based on that, deformation characteristics, stability mechanisms and the influencing factors of landslide mass are studied deeply. The results show that the drawdown of the water level of the Three Gorges Reservoir region is the main controlling factor of the deformation of the landslide mass. The results of the study have a significant value of reference on the stability analysis of landslide mass under the similar engineering geological conditions. Zebra layers and palaeoenvironment of Late Miocene Stratum in the Linxia Basin, northwestern China Xiuqing Nian Xiuming Liu Hui Guo Zhi Liu Bin Lu Fengqing Han Miocene strata in the Linxia Basin (Gansu, China) are usually interpreted as lacustrine sediments. However, the red–grey inter-beds known as 'Zebra layers' commonly tilt with respect to the terrain on the side slopes of the modern valley, which may be due to mantling palaeotopography (similar to aeolian loess). The anisotropy of magnetic susceptibility, which reflects the original arrangement of magnetic particles in sediments, was applied to investigate this phenomenon. The results showed that the tilting of the inter-beds in the side slope was due to mantle palaeotopography rather than soil creep, and that they were not deposited in a subaqueous environment. The grain sizes of sediments showed similar features as aeolian loess. We speculate that Miocene sediments were deposited by mantling the palaeotopography where aeolian materials accumulated. 
After deposition, flowing water submerged these strata, which caused the side slope to become gradually thinner from top to bottom and stirred the magnetic particles in these sediments. The grey colour of the Zebra layers may not be original, as it may be due to waterlogging and deoxidation after deposition; finally, when the iron oxides in these sediments were transformed or removed, their colours became grey. The formation of Zebra layers indicates that the Late Miocene palaeoenvironment of northwestern China was similar to that in which Quaternary aeolian loess was deposited. Grain-size distribution of surface sediments of climbing and falling dunes in the Zedang valley of the Yarlung Zangbo River, southern Tibetan plateau Jiaqiong Zhang Chunlai Zhang Qing Li Xinghui Pan Climbing and falling dunes are widespread in the wide valleys of the middle reaches of the Yarlung Zangbo River. Along a sampling transect running from northeast to southwest through 10 climbing dunes and two falling dunes in the Langsailing area, the surface sediments were sampled to analyse the grain-size characteristics, to clarify the transport pattern of particles with different grain sizes, and to discuss the effects of terrain factors including dune slope, mountain slope, elevation and transport distance to sand transport. Sand dunes on both sides of the ridge are mainly transverse dunes. Fine and medium sands were the main particles, with few very fine and coarse particles in the surface sediments. Particles >4.00$\Phi$ were blown upslope by suspension, particles 1.00${-}$4.00$\Phi$ were mainly transported upslope by saltation with opposite change tendency, and particles <1.00$\Phi$ mainly moved by creep were found almost exclusively at the bottom of the slopes. As terrain factors, elevation and transport distance were more important factors influencing the distribution of grain size and particle fraction on dunes. Local winds observation might be helpful for the transport mechanism study of particles on climbing and falling dunes, while the wind data from nearby weather station was hardly helpful. Simulation of the fate and transport of boron nanoparticles in two-dimensional saturated porous media Chunmei Bai Baisha Weng Huan Sheng Lai The wide production and application of engineered nanomaterials (ENMS) inevitably lead to their release in the groundwater environment. However, the release and transport of boron nanoparticles in the multi-dimensional subsurface remain largely unknown. In this work, a multi-dimensional numerical simulator for the transport of boron nanoparticles in the saturated porous media was first developed and validated. Hypothetical scenarios for the release of boron nanoparticles into a layered two-dimensional (2D) and heterogeneous 2D saturated porous media were then explored, and compared with the fullerene nanoparticles. The results demonstrated that the soil heterogeneity influenced the fate of nanoparticles, with high permeable layers and high aqueous-phase concentration. Besides, the boron nanoparticles tend to accumulate at the inlet zones, where it was closer to a nanoparticles source. Different layers of interface interaction also impact the concentration of nanoparticles. In general, the mobility and aqueous-phase concentration of fullerene nanoparticles were higher than those of the boron nanoparticles. In addition, the mobility of boron nanoparticles was found to be sensitive to release concentration, soil porosity and nanoparticle aggregate size. 
Multi-year satellite observations of tropospheric NO$_{2}$ concentrations over the Indian region Madhav Haridas M K Biswadip Gharai Subin Jose Prajesh T An assessment of satellite-derived long-term tropospheric nitrogen dioxide (NO$_{2}$) data is performed over the Indian region and their implications on the regional air quality are discussed. The Indo-Gangetic plain (IGP) shows an increasing trend in NO$_{2}$ of the order of 3$\times$ 10$^{13}$ mol/cm$^{2}$/ yr. The pixel-wise (0.25 km) trend for the period 2005–2014 reveals various regions having increased rates of pollution over the study period. Further, the mean seasonal concentrations of NO$_{2}$ are segregated for different parts of the country including oceanic regions and the trends are brought out. The highest rate of increase of tropospheric NO$_{2}$ (2$\times$ 10$^{14}$ mol/cm$^{2}$/yr) is seen around coal mining areas and certain industrial areas such as ports and thermal power stations. Using the data spanning 10 years, the wavelet analysis is carried out to study the influence of semi-annual oscillations (SAO) on trace gas concentrations in different parts of the country. The study reveals that the SAO are stronger in the northern parts of India, including IGP and western India, whereas South India and oceanic regions are having very low SAO component and strong annual oscillation component. Generalised extreme value model with cyclic covariate structure for analysis of non-stationary hydrometeorological extremes Jagtap R S Gedam V K Mohan M Kale Studies carried out recently on hydrometeorological extremes report the evidence of non-stationarity induced by potential long-term climatic fluctuations and anthropogenic factors. A critical examination of the stationarity assumption has been carried out and a non-stationary generalised extreme value model with cyclic covariate structure for modelling magnitude and variation of data series with some degrees of correlation for real-world applications is proposed. Interestingly, the sinusoidal function with periodicity around 30 yr has been derived as a suitable covariate structure to deal with the ambiguous nature of temporal trends and this could possibly be linked to 'Sun cycles'. It has adequately explained the cyclic patterns recognised in the annual rainfall which are helpful for realistic estimation of quantiles. Various diagnostic plots and statistics support the usefulness of the proposed covariate structure to tackle potential non-stationarities in the data characterising extreme events in various fields such as hydrology, environment, finance, etc. Holocene environmental changes in Red River delta, Vietnam as inferred from the stable carbon isotopes and C/N ratios Nguyen Tai Tue Dang Minh Quan Pham Thao Nguyen Luu Viet Dung Tran Dang Quy Mai Trong Nhuan The present study applied stable carbon isotopes, C/N ratios, and sedimentological indicators to reconstruct environmental changes during Holocene and to test the hypothesis that $\delta^{13}$C and C/N ratios are accurate proxies of sea level change in the Red River delta (RRD), Vietnam. A 36 m long sediment core was mechanically drilled in the wave-dominated region of the RRD. The covariation of lithological characteristics, sediment grain-size distribution and geochemical proxies (LOI, TOC, C/N, $\delta^{13}$C) suggested that the sediment core could be divided into six depositional environments, consisting of sub- and inter-tidal flats (formed before 8860 cal. 
year BP), shelf-prodelta, delta front slope (formed from 8860 to 2290 cal. year BP), delta front platform, tidal flat, and flood plain (from 2290 to 0 cal. year BP). Covariation of $\delta^{13}$C and C/N ratios in the sediment core allowed for tracing the origin of sedimentary organic carbon, which shifted from the dominance of mangroves and C3 plants at the sub- and inter-tidal flats to marine phytoplankton at the shelf-prodelta and delta front slope. The sedimentary sources of the delta front platform, tidal flat and flood plain were a mixture of phytoplankton and C3 plants, with the later source being dominant. Arctic summer sea-ice seasonal simulation with a coupled model: Evaluation of mean features and biases Saheed P P Ashis K Mitra Imranali M Momin Rajagopal E N Helene T Hewitt Ann B Keen Sean F Milton Current state of the art weather/climate models are representation of the fully coupled aspects of the components of the earth system. Sea-ice is one of the most important components of these models. Simulation of sea-ice in these models is a challenging problem. In this study, evaluation of the hind-cast data of 14 boreal summer seasons with global coupled model HadGEM3 in its seasonal set-up has been performed over the Arctic region from 9th May start dates. Along with the biases of the sea-ice variables, related atmosphere and oceanic variables have also been examined. The model evaluation is focused on seasonal mean of sea-ice concentration, sea-ice thickness, ocean surface current, SST, ice-drift velocity and sea-ice extent. To diagnose the sea-ice biases, atmospheric variables like, 10 m wind, 2 m air temperature, sea-level pressure and ocean sub-surface temperatures were also examined. The sea-ice variables were compared with GIOMAS dataset. The atmospheric and the oceanic variables were compared with the ERA Interim and the ECMWF Ocean re-analysis (ORAP5) datasets, respectively. The model could simulate the sea-ice concentration and thickness patterns reasonably well in the Arctic Circle. However, both sea-ice concentration and thickness in the model are underestimated compared to observations. A positive (warm) bias is seen both in 2 m air temperature and SST, which are consistent with the negative sea-ice bias. Biases in ocean current and related ice drift are not related to biases in the atmospheric winds. The magnitude of the oceanic subsurface warm biases is seen to be gradually decreasing with depth, but consistent with sea-ice biases. These analyses indicate a possibility of deeper warm subsurface water in the western Arctic Ocean sector (Pacific and Atlantic exchanges) affecting the negative biases in the sea-ice at the surface. The model is able to simulate reasonably well the summer sea-ice melting process and its inter-annual variability, and has useable skill for application purpose. Mapping magnetic lineaments and subsurface basement beneath parts of Lower Benue Trough (LBT), Nigeria: Insights from integrating gravity, magnetic and geologic data Mukaila Abdullahi Upendra K Singh Ravi Roshan In this study, we present the analysis of the aeromagnetic data of parts of the Lower Benue Trough. Lineament analysis of the aeromagnetic data demonstrated four tectonic trends of the basement terrain. The lineaments are in the northeast to southwest (NE–SW), east, northeast to west, southwest (ENE–WSW), north to south (N–S), and east, southeast to west, northwest (ESE–WNW) directions. 
The NE–SW and ENE–WSW are the most dominant whereas the N–S and ESE–WNW are the minor trends. The estimated magnetic basement using spectral analysis vary between 3.5 and 5 km and the shallow magnetic sources (depth to top of intrusions) vary between 0.24 and 1.2 km. The result of the basement estimation from the magnetic data is comparable with the previous results from other studies as well as with the basement depth estimated from the gravity data of part of the present study area are incorporated in the study. From the gravity data, we identified sub-basin around Makurdi and basement of the sedimentary basin (5 km) is estimated using GPSO algorithm and Oasis Montaj (Geosoft). The first record of active methane (cold) seep ecosystem associated with shallow methane hydrate from the Indian EEZ Mazumdar A Dewangan P Peketi A Gullapalli S Kalpana M S Naik G P Shetty D Pujari S Pillutla S P K Gaikwad V V Nazareth D Sangodkar N S Dakara G Kumar A Mishra C K Singha P Reddy R Here we report the discovery of cold-seep ecosystem and shallow methane hydrates (2–3 mbsf) associated with methane gas flares in the water column from the Indian EEZ for the first time. The seep-sites are located in the Krishna–Godavari (K–G) basin at water depths of 900–1800 m and are characterized by gas flares in the water-column images. The occurrence of methane gas hydrates at very shallow depths (2–3 mbsf) at some of the seep-sites is attributed to high methane flux and conducive P–T conditions, necessary for the stability of methane hydrate. Chemosymbiont bearing Bivalves (Vesicomidae, Mytilidae, Thyasiridae and Solemyidae families); Polychaetes (Siboglinidae family) and Gastropods (Provannidae family) are also identified from seep-sites. Geological, geochemical and Rb–Sr isotopic studies on tungsten mineralised Sewariya–Govindgarh granites of Delhi Fold Belt, Rajasthan, NW India Sivasubramaniam R Vijay Anand Sundarrajan Pandian M S Balakrishnan S Neoproterozoic granites are widespread in the Delhi Fold Belt of the Aravalli craton, some of which are associated with tungsten mineralisation. In one such instance, the volcano-sedimentary sequence of Barotiya Group in the South Delhi Fold Belt is intruded by a pluton of biotite granite gneiss known as Sewariya Granite (SG) and later by stocks and dyke swarm of tourmaline leucogranite known as Govindgarh Granite (GG). GG magmatism was associated with wolframite mineralisation in hydrothermal quartz veins occurring along the sheared contact between SG pluton and Barotiya mica schist. SG pluton shows the evidence of ductile and brittle deformations, whereas GG is by and large undeformed. Apart from quartz and feldspars, SG contains biotite and muscovite, and GG contains muscovite, tourmaline and garnet. Although both SG and GG are peraluminous, SG has a wide range of SiO$_{2}$ and narrow range of alkalis, and GG has a narrow range of SiO$_{2}$ and a wide range of alkalis. REE (rare Earth elements) modelling shows that the parent magma of SG and GG was derived from partial melting at different crustal levels. Rb–Sr isotope data of GG yield a mineral isochron age of 860 $\pm$ 7.4 Ma which represent the time of igneous crystallisation and cooling of the granite to less than 400$^{\circ }$C. 
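As a small illustration of how a mineral isochron age such as the one quoted above is obtained, the sketch below fits a Rb–Sr isochron and converts its slope to an age using the conventional 87Rb decay constant; the isotopic ratios are synthetic values chosen to reproduce an age of roughly 860 Ma and are not the paper's measurements.

```python
import numpy as np

# Hypothetical mineral/whole-rock pairs (87Rb/86Sr, 87Sr/86Sr); not data from the paper.
rb_sr = np.array([0.5, 2.0, 5.0, 10.0, 20.0])
sr_ratio = np.array([0.7161, 0.7346, 0.7714, 0.8329, 0.9557])

# Least-squares isochron: 87Sr/86Sr = initial + slope * (87Rb/86Sr).
slope, initial = np.polyfit(rb_sr, sr_ratio, 1)

# Conventional 87Rb decay constant (Steiger & Jaeger, 1977).
LAMBDA_RB87 = 1.42e-11  # per year

age_ma = np.log(slope + 1.0) / LAMBDA_RB87 / 1e6
print(f"slope = {slope:.5f}, initial 87Sr/86Sr = {initial:.4f}, age = {age_ma:.0f} Ma")
```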
Seasonal variability of sea-surface temperature fronts associated with large marine ecosystems in the north Indian Ocean Kankan Sarkar Aparna S G Shrikant Dora Shankar D We use 14 years of satellite-derived sea-surface temperature (SST) data to compute a monthly frontal probability index (FPI) to determine the existence of a front in a pixel. A persistent SST front is deemed to exist if the FPI in a narrow region exceeds that in the surrounding ocean. We describe the seasonal variability of 17 persistent SST fronts (eight associated with the shelf-slope boundary and five with the mixing between different water masses) in the north Indian Ocean. Only weak fronts exist during a few months in the strong upwelling regimes off Somalia and Oman. Wyrtki Jets: Role of intraseasonal forcing Prerna S Abhisek Chatterjee Mukherjee A Ravichandran M Shenoi S S C Direct current measurements observed from the acoustic Doppler current profilers in the equatorial Indian Ocean (EIO) and solutions from an ocean general circulation model are investigated to understand the dynamics of the Wyrtki jet. These jets are usually described as semiannual direct wind forced zonal currents along the central and eastern EIO. We show that both, spring and fall, Wyrtki jets show predominant semiannual spectral peaks, but significant intraseasonal energy is evident during spring in the central and eastern EIO. We find that for the semiannual band, there is a strong spectral coherence between the overlying winds and the currents in the central EIO, but no coherency is observed in the eastern part of the EIO. Moreover, for the intraseasonal band, strong coherency between the winds and currents is evident. During spring, intraseasonal currents induced by the Madden–Julian oscillation (MJO) superimpose constructively with semiannual currents and thus intensify the strength of the spring Wyrtki jet. Also, the atmospheric intraseasonal variability accounts for the interannual variabilities observed in spring Wyrtki jets. Fractal dimension analysis for seismicity spatial and temporal distribution in the circum-Pacific seismic belt Yin Lirong Li Xiaolu Zheng Wenfeng Yin Zhengtong Song Lihong Ge Lijun Zeng Qingchuan In this study, we present the fractal characteristics of the spatio-temporal sequence for seismic activity in the circum-Pacific seismic belt and vicinity regions, which is one of the most active seismic zones worldwide. We select the seismic dataset with magnitude $M \geq$ 4.4 in the circum-Pacific seismic belt region and its vicinity from 1900–2015 as the objects. Based on the methods of capacity dimension and information dimension, using ln(1/$\delta$)–ln N($\delta$) of the relationship to evaluate and explain, the results show that (1) in the circum-Pacific seismic belt and the surrounding areas, for the seismic activity with magnitude $M \geq$ 4.4, the time series dimension is 0.63, the spatial distribution dimension is 0.52 and they have fractal structure. (2) For the earthquakes with $M \geq$ 7.0, the time series dimension increases greatly, which indicates that the cluster characteristics in time is greatly reduced. And the earthquakes with magnitude 7.0 $\geq$ $M \geq$ 4.4 have significant impact on the characterized by clustering in time in the study region. (3) There is significant fractal structure at spatio-temporal distribution of earthquakes in the circum-Pacific seismic belt. 
This reveals that tectonic movement in the study area has been continuous, that the geological structure shows a clearly anisotropic character, and that the surface stress field is distributed heterogeneously in space and time.
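A minimal sketch of the box-counting (capacity dimension) estimate described in this abstract, i.e., the slope of ln N(δ) against ln(1/δ), is given below; the epicentre coordinates are a synthetic clustered set, not the actual circum-Pacific catalogue.

```python
import numpy as np

def capacity_dimension(points, box_sizes):
    """Box-counting (capacity) dimension from the slope of ln N(delta) vs ln(1/delta)."""
    points = np.asarray(points, dtype=float)
    mins = points.min(axis=0)
    log_inv_delta, log_n = [], []
    for delta in box_sizes:
        # Count occupied boxes of side length delta.
        idx = np.floor((points - mins) / delta).astype(int)
        n_boxes = len({tuple(row) for row in idx})
        log_inv_delta.append(np.log(1.0 / delta))
        log_n.append(np.log(n_boxes))
    slope, _ = np.polyfit(log_inv_delta, log_n, 1)
    return slope

# Made-up epicentre coordinates (degrees): a clustered synthetic set for illustration.
rng = np.random.default_rng(1)
centres = rng.uniform(0, 10, size=(20, 2))
epicentres = np.concatenate([c + 0.3 * rng.standard_normal((50, 2)) for c in centres])

print("estimated capacity dimension:",
      round(capacity_dimension(epicentres, box_sizes=[2.0, 1.0, 0.5, 0.25]), 2))
```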
Health risks due to consumption of pesticides in ready-to-eat vegetables (salads) in Kumasi, Ghana Samuel Akomea-Frempong, Isaac W. Ofosu, Emmanuel de-Graft Johnson Owusu-Ansah & Godfred Darko Pesticide residue levels were determined in ready-to-eat vegetables collected from 16 sites along the food chain, that is, farms, markets, cafeterias and street food vending sites in Kumasi, Ghana. The aim of the study was to determine the concentrations of pesticide residues in two ready-to-eat vegetables and assess the health risks due to consumption of these contaminated vegetables. Pesticide residues in ready-to-eat vegetables or salads were extracted by means of the QuEChERS method. Synthetic pyrethroid and organophosphorus pesticide residues in the samples were determined using Gas Chromatography with an Electron Capture Detector and a Pulsed Flame Photometric Detector, respectively. Consumption data for ready-to-eat vegetables were obtained from a questionnaire-based dietary survey in the study area. The hazard index and relative potency factor (RPF) approaches were used to assess the health risk from chronic cumulative dietary exposure to pesticides. Six synthetic pyrethroid residues were detected in the ready-to-eat samples at varying concentrations: bifenthrin, permethrin, cypermethrin, deltamethrin, lambda-cyhalothrin and fenvalerate. Two organophosphates, chlorpyrifos and diazinon, were also detected in the samples. Lambda-cyhalothrin residues were present in all the samples in the study, with a mean concentration of 4.5 × 10−2 mgkg−1. The mean concentration of diazinon in all the samples (0.040 mgkg−1) exceeded the EU MRL (0.01 mgkg−1), and chlorpyrifos exceeded its MRL in one sample each from the street food vending sites and the cafeterias. Deltamethrin, fenvalerate and permethrin exceeded their respective MRLs in samples from Asafo (street food vending site), Adum (cafeteria) and KNUST (farm), respectively. However, the hazard indices of all pesticide residues detected were below the permissible limit. The cumulative intakes from the RPF approach for the pesticides were lower than the ADIs of the index chemicals. The concentrations of chlorpyrifos, deltamethrin, fenvalerate, diazinon and permethrin exceeded their respective EU MRLs in some ready-to-eat vegetable samples in the study. The hazard indices and the comparison of the cumulative intake from the RPF approach with the ADIs of the index chemicals suggest that pesticide residues in ready-to-eat vegetables cannot be considered a major public health problem. Agriculture boosts the economy of the nation as most people depend on it for their livelihood (Hossain et al. 2015). In order to meet the increasing demand for food from a growing population, extensive use of pesticides to kill pests that destroy crops has become popular in most developing countries as a way to increase crop yields (Akoto et al. 2015; Amoah et al. 2006; Chowdhury et al. 2012). Vegetables are the second major food group consumed after cereals and their products in West Africa (Stadlmayr et al. 2013). Moreover, increased vegetable production is important to Ghana's food security strategy (Akoto et al. 2015). Vegetable consumption promotes good health because of the nutritive components of vegetables (Verma et al. 2015). However, much of the vegetable crop, such as chilli (hot) pepper, sweet pepper, tomato and okra, is lost on the farm to pest infestation (Degri and Zainab 2013).
As a result, most farmers (over 80%) in Ghana use pesticides and also to protect the crops quality and reduce cost of production. Reports show that there is an unselective use of pesticides among Ghanaian farmers during vegetable cultivation (Ntow et al., 2006). Furthermore, farmers do not respect pre-harvest interval after using pesticide since consumers demand for vegetables have increased (Darko and Akoto 2008; Ntow et al. 2006). There have been several global attempt to reduce or even eliminate the use of pesticides. However, there is still evidence of their presence in various vegetables as a result of application of doses above the recommended dosage. This is due to the ignorance of most farmers to toxic effects of pesticide overdoses (Horna et al. 2008). Pesticide residues in vegetables have been found to be detrimental to human health particularly when they are freshly consumed (Baig et al. 2009; Chen et al. 2011; Solecki et al. 2005). Several studies suggest a possible negative correlation between pesticide residues and human and animal health (Berrada et al. 2010; Chowdhury et al. 2012). In animals, there have been adverse effects such as cancer when laboratory animals were exposed to organophosphates and synthetic pyrethroids (Akoto et al. 2015). In humans, diseases such as headaches and nausea are known to be acute symptoms to pesticide exposure (Ali and Tahir 2000; Chowdhury et al. 2012). However, cancer, reproductive defects (Bassil et al. 2007), developmental impairment, immunotoxicity (Berrada et al. 2010), birth defects and endocrine disruption are associated symptoms (Longnecker et al., 1997). According to WHO (2002) pesticide toxicity resulted in about 849,000 death of people globally in 2001 (Hossain et al. 2015). But most of them occurred in the developing countries. More studies need to be done on pesticides because of their widespread misapplication and long range atmospheric transport and deposition (Akoto et al. 2013; Darko and Acquah 2008). An increase rate of pesticide use by farmers in Ghana to maximize profitability (Darko 2009). Moreover in Ghana, studies show the presence of pesticides in cereals (Akoto et al. 2013), fish and water sediments (Darko et al. 2008), even in the atmosphere (Hogarh et al. 2014) and in vegetables (Akoto et al. 2015; Botwe et al. 2011; Darko and Akoto 2008). Most of these studies in Ghana, showed the presence of pesticides in vegetables at the market level (Akoto et al. 2015; Bempah et al. 2016). It is therefore, important to assess the health effects on consumers of vegetables (cabbage and lettuce) that are eaten raw in Ghana, especially along the food chain (farm, market, cafeterias and restaurants). This could be evaluated if the levels of pesticide residues in ready-to-eat vegetables (salads) are determined since data on them are scanty. The aims of this study were to determine the concentrations of the pesticide residues in ready-to-eat vegetables (salads) and to assess the health risk they pose to consumers. The ready-to-eat vegetables consisting of fresh cabbages (Brassica oleracea) and lettuces (Lactuca sativa) were sampled from 16 sites in Kumasi, Ghana. The sample sites consisted of 4 farms, 4 market centres, 4 street food vending points and 4 cafeterias. The glassware used for extraction of the pesticide residues in the vegetables were thoroughly cleaned with detergent and distilled water. BDH Chemical Ltd., UK, provided the reagents and solvents and they were all of analytical grade. 
The various pesticide standards were obtained from Sigma Aldrich (St. Louis, Missouri, USA). Eight hundred grams (800 g) of ready-to-eat lettuce and cabbage each per sampling site were taken from the farms and market centres (Table 1). Also, 800 grams of salad (mixture of cabbage and lettuce) was sampled from each street food vending sites and cafeteria (Table 1) over a 3-weeks period in October 2015. Table 1 Sampling areas of ready-to-eat vegetables Vegetables were stored in labeled bags and transported to the laboratory where they were kept refrigerated at 4 °C until analysis. A semi structured food frequency questionnaire (Additional file 1) was used in an interview schedule involving 406 respondents including farmers, vegetable sellers, consumers, food vendors and restaurateurs. Attached to the questionnaire, different quantities of ready-to-eat vegetables (5, 6.05, 10, 13, 15, 20, 25, 30 and 50 g) were added in zip bags. Respondents were asked to identify the quantities they consume per serving. This helped to determine the amount of vegetables eaten by the respondents; which was used in determining the consumption rates. Additional information on socio-demographic data and the body weight of the respondents were obtained from the questionnaire. Pesticide quantification Extraction and clean-up of pesticide residues Extraction and clean-up of pesticide residues was performed by means of the QuEChERS method of Payá et al. (2007) with slight modification. Ten grams of homogenous sample of vegetables (lettuce and cabbage) were weighed into 50 mL centrifuge tube. The mixture was vortexed for 1 min to obtain a uniform mixture after 10 mL of acetonitrile was added to the mixture. A mixture of salt (4 g of magnesium sulphate anhydrous, 1 g of sodium chloride, 1 g of trisodium citrate dehydrate and 0.5 g disodium hydrogencitrate sesquihydrate) was added to the acetonitrile-based mixture. The resultant mixture was vigorously macerated for 1 min and centrifuged at 3000 rpm for 5 min. Aliquots (6 mL) of the extract (organic phase) was transferred into centrifuge tube which contains 150 mg primary secondary amine and 900 mg magnesium sulphate and vortexed for 30 s followed by centrifugation for 5 min at 3000 rpm. A 4 mL aliquot of extract was transferred into a pear shaped flask and 40 μL of 5% formic acid solution in acetonitrile (v/v) was added to adjust the pH of the extracts to 5. The filtrate was concentrated to near dryness on a rotary evaporator (Rotary R-210). One milliliter of ethyl acetate and 20 μL of ethyl acetate (v/v) containing a percentage of glycol solution was used to re-dissolve the concentrated filtrate before transferring the extract into 2 mL vial for quantification by GC with ECD and PFPD. Determination of pesticides Separation and quantification of synthetic pyrethroid pesticides were carried out on Varian CP 3800 gas chromatograph with a CombiPAL autosampler equipped with an Electron Capture Detector (ECD, 63Ni), on 30 m + 10 mEZ Guard 0.25 mm internal diameter fused silica capillary column coated with VF-5 ms (0.25-μm film). The initial column oven temperature was 70 °C, held for 2 min and increased to 180 °C at a rate of 25 °C min−1, and then from 180 °C to 300 °C at a rate of 5 °C min−1. Purified nitrogen gas was used as carrier gas at the flow rate of 1.0 mL min−1 and as the make-up gas of 29 mL min−1. The injector and detector temperatures were maintained at 270 °C and 300 °C, respectively. The injection volume was 1.0 μL. 
Separation and quantification of organophosphate pesticide residues were carried out using a Varian CP-3800 gas chromatograph with a CombiPAL autosampler equipped with a Pulsed Flame Photometric Detector, on a 30-m by 0.25-mm internal diameter fused silica capillary column coated with VF-1701 ms (0.25-μm film). The column oven temperature was programmed as follows: initial temperature 70 °C, increased to 200 °C at a rate of 25 °C min−1, then increased to 250 °C at a rate of 20 °C min−1 and held at the final temperature for 2 min. The carrier gas was nitrogen at a flow rate of 2 mL min−1. The injector and detector temperatures were maintained at 250 and 280 °C, respectively. The injection volume was 2.0 μL. In order to check for interference from the samples, blank analyses were also performed. The retention times of the reference standards were used to identify the pesticide residues in the extracts. Quantification was achieved by comparing sample peak areas with those of the external reference standards run under the same conditions. The sample extracts and reagent blanks were fortified with a mixed pesticide standard at 0.01 mg/kg to determine the recovery of the method. Residue concentrations were calculated by comparing the sample peak areas with those of the calibration standards. The limits of detection of the method were 0.001 mgkg−1, found by determining the lowest concentrations of the residues in each matrix that could be reproducibly measured under the operating conditions of the GC, using the quotient of 3.3σ and the slope of the calibration curve. The method was validated over the range 0.01 mgkg−1 to 1.0 mgkg−1: fortified recoveries for all matrices ranged from 70 to 120%, relative standard deviations ranged from 3.0 to 12.6% and the correlation coefficients were greater than 0.99. Hazard index To estimate the risk of non-carcinogenic effects, the estimated daily intakes (EDI) were divided by their corresponding acceptable daily intakes (ADI), as shown in Eq. 1. An HI greater than 1 is an indication of non-carcinogenic risk (Wang et al. 2011). $$ HI=\frac{EDI}{ADI} $$ The ADIs (mgkg−1 day−1) of bifenthrin, cypermethrin, deltamethrin, fenvalerate, lambda-cyhalothrin, permethrin, chlorpyrifos and diazinon (Hossain et al. 2015; Akoto et al. 2015) were obtained from various epidemiological studies and recorded in databases by various health departments, while the average EDIs of the various pesticides (mgkg−1 d−1) were calculated using Eq. 2. $$ EDI=\frac{C_{p}\times CR_{p}}{B_{wc}} $$ where CRp is the consumption rate (kg d−1) of vegetables, Cp is the mean concentration of pesticide residues (mgkg−1) in the vegetable samples and Bwc is the average body weight (kg) of the consumers. Researchers such as Saha and Zaman (2012) have revealed that there could be interactive and/or additive effects upon exposure to two or more pollutants. This could lead to a combined health index. Because of this, the US EPA (2000) proposed a hazard index method, which has been used to estimate the risk posed by a group of pesticides that act by a common mechanism or that are toxicologically similar (Reffstrup et al. 2010). According to Reffstrup et al. (2010), the combined hazard index (CHI) is determined using Eq. 3.
$$ CHI=\frac{E_1}{A_1}+\frac{E_2}{A_2}+\dots +\frac{E_n}{A_n}=\sum_{i=1}^n\frac{E_i}{A_i} $$ where E_1, E_2, …, E_n (E_i) are the estimated daily intakes of each individual pesticide i in a mixture of n pesticides in the food sample, and A_1, A_2, …, A_n (A_i) are the corresponding acceptable daily intakes (ADIs) (US EPA 2000). If the combined hazard index exceeds 1, the overall mixture of pesticide residues has exceeded the maximum acceptable limit and might pose a risk to consumers.
Relative potency factors
The study applied the relative potency factor (RPF) approach, as described in Quijano et al. (2016), to estimate the chronic cumulative exposure to pesticides in the study area. This method used methamidophos and deltamethrin as index compounds. An index compound is a compound with an extensive toxicological database and is among the best studied within its group. Other studies (Boon et al. 2008; Jensen et al. 2009) used this method to estimate total OP and carbamate exposure in diets. The averages of pesticide concentrations, consumption rates, and body weights were used to calculate the consumer risk of pesticides via ready-to-eat vegetables, since averages typify a more practical and useful scenario in real-world sample analysis. The consumption rate of each pesticide via ready-to-eat vegetables was calculated from the results of the survey and the laboratory reports of pesticide residues. The individual estimated daily intake (iEDI) of pesticide-contaminated ready-to-eat vegetables was calculated using Eq. 4. $$ \mathrm{iEDI}\left(\mathrm{mg\ per\ mean\ kg\ body\ weight\ per\ day}\right)={C}_r\times {F}_r $$ where C_r is the concentration of pesticide residues in ready-to-eat vegetables and F_r is the consumption rate of ready-to-eat vegetables. Methamidophos and deltamethrin were selected as index compounds for OP (USEPA 2006) and SP (USEPA 2011), respectively, in the RPF approach. RPFs for OP were derived from the literature and were based on the benchmark dose at 10% acetylcholinesterase (AChE) inhibition (BMD10) in the brain of female rats (Jensen et al. 2009; USEPA 2006). The two OP detected in this study were among the 24 OP compounds, published by the EPA, that use methamidophos as the index compound. For the pyrethroids, six of the 15 SP in the common assessment group (CAG) published by the US EPA were considered in this study. The published RPFs of SP were determined by dividing the chemical-specific BMD20 by the BMD20 of the index chemical deltamethrin (USEPA 2011), as shown in Additional file 2. Chronic cumulative exposure was expressed as methamidophos and deltamethrin equivalents by multiplying the iEDI of each mean pesticide value by its adjusted RPF and summing the different equivalents into one cumulative intake, where the adjusted RPF is the product of the RPF and a safety factor (Food Quality Protection Act, "FQPA", 1996) incorporated in many RPFs (U.S. Environmental Protection Agency 2002). This is protective of human health in cumulative pesticide risk assessment, since there is not sufficient information to refine the interspecies factor. To assess whether there was a risk, the mean cumulative intakes of OP and SP were compared to the ADI of the corresponding index compound.
Pesticide residues
The concentrations of pesticide residues in the samples are shown in Table 2.
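Before turning to the results, the exposure arithmetic of Eqs. 1-4 can be made concrete with a minimal Python sketch. This is not the authors' computational pipeline: the residue concentrations, ADIs and RPFs below are hypothetical placeholders, and only the consumption rate and mean body weight reuse the survey averages reported later in the text.

```python
def edi(c_p, cr_p, b_wc):
    """Eq. 2: estimated daily intake (mg/kg body weight/day)."""
    return c_p * cr_p / b_wc

def hazard_index(edi_value, adi):
    """Eq. 1: HI = EDI / ADI; HI > 1 flags a potential non-carcinogenic risk."""
    return edi_value / adi

def combined_hazard_index(edis, adis):
    """Eq. 3: CHI = sum_i E_i / A_i for a mixture of n pesticides."""
    return sum(e / a for e, a in zip(edis, adis))

def cumulative_intake(iedis, adjusted_rpfs):
    """RPF approach: cumulative intake in index-compound equivalents,
    sum_i iEDI_i * adjusted RPF_i (to be compared with the index compound's ADI)."""
    return sum(i * r for i, r in zip(iedis, adjusted_rpfs))

# Survey-based averages reported in the consumption section of the text:
cr_p, b_wc = 0.0235, 68.13            # 23.5 g/day consumption, 68.13 kg body weight

# Hypothetical residue concentrations (mg/kg) and ADIs (mg/kg bw/day) for two pesticides:
concentrations = [0.05, 0.02]
adis = [0.01, 0.005]

edis = [edi(c, cr_p, b_wc) for c in concentrations]
print([f"{hazard_index(e, a):.2e}" for e, a in zip(edis, adis)])
print(f"CHI = {combined_hazard_index(edis, adis):.2e}")
print(f"Cumulative intake = {cumulative_intake(edis, [1.0, 0.3]):.2e}")  # RPFs hypothetical
```

In practice, each EDI would be computed from the measured mean concentration of that pesticide in the samples, and the ADIs and adjusted RPFs would be taken from the sources cited above.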
Table 2 Concentration of pesticide residues in ready-to-eat vegetables (mgkg−1) The study shows that all samples analyzed were contaminated with either one or more pesticides. Only 2 organophosphorus pesticides residues were detected out of the 13 considered in the samples (Table 2). Furthermore, 6 out of 9 synthetic pyrethroid pesticides were detected in the samples (Table 2). Among the detected pesticide residues in the samples, only lambda–cyhalothrin having varying concentrations ranging from 0.004 to 0.130 mgkg−1 were detected in all the 16 samples. Lambda-cyhalothrin is used to control pests such as a cabbage worms, lettuce flea beetles, leaf miners and aphids. Nine out of the 16 samples had concentrations ranging from 0.034 to 0.090 mgkg−1 of chlorpyrifos in them, with concentrations in two samples (RVS 12 = 0.089 and RVS 13 = 0.090 mgkg−1) exceeding its maximum residue limit of 0.05 mgkg−1 (EU 2013). Moreover, the concentration of diazinon ranging from 0.066 to 0.184 mgkg−1 exceeded their maximum residue limit of 0.01 mgkg−1 (EU 2013). Diazinon residues were identified in 4 of the 16 samples. However, varying concentrations of synthetic pyrethroid pesticides (ranging from 0.007 to 0.320 mgkg−1 for bifenthrin, 0.002 to 0.070 mgkg−1 for cypermethrin and 0.001 to 0.021 mgkg−1 for deltamethrin) detected in the samples were within their corresponding maximum residuelimit. Fenvalerate and permethrin detected in samples RVS 11 and RVS 1 had their concentrations above their maximum residue limits respectively. Seven samples had at least a pesticide concentration which was greater than their respective maximum residuelimits. Residues of cypermethrin were detected in 2 of the 16 samples at a mean concentration of 0.005 mgkg−1 which was lower than concentration of cypermethrin in cabbage and lettuce as 0.071 mgkg−1 and 0.079 mgkg−1 respectively in a study conducted by Yuan et al. (2014). Their study also recorded a higher mean concentration of chlorpyrifos in cabbage and lettuce as 0.09 mgkg−1 and 0.024 mgkg−1 respectively. Two samples RVS 1 (0.460 mgkg−1) for permethrin and RVS 3 (0.179 mgkg−1) for diazinon from farm sites in KNUST and Chirapatre exceeded their respective MRL. The other samples (RVS 12 and 13 for chlorpyrifos, RVS 15 for deltamethrin, RVS 11 for fenvalerate and RVS 10, 11, and 15 for diazinon) that exceeded their MRL were either from cafeteria or street food vending sites. This differs from expectation as samples from farm sites were expected to have higher pesticide residue levels since pesticides application occurs there. According to Pérez et al. (2016) processes such as blanching, cooking, frying, peeling and washing reduces pesticide residue levels in fruits and vegetables. Pesticide residues that are loosely attached to the surface were observed to be reduced by washing while peeling removes even those that have penetrated the cuticles of the fruits or vegetables. Contrary, washing of vegetables could increase the pesticide residues sampled from cafeterias and street food vending sites in this study. This is because earlier research revealed that people along the vegetable chain wash several vegetables in the same water in a container without changing these waters; this could re-contaminate these vegetables (Amoah et al. 2008). 
Moreover, most of these pesticides, such as lambda-cyhalothrin, are not produced only as agricultural insecticides for food and non-food crops, but are also used indoors and outdoors in homes, hospitals, and other buildings (World Health Organization 1990). In general, lambda-cyhalothrin was the most prevalent pesticide across the samples analyzed, followed by fenvalerate, chlorpyrifos, bifenthrin, deltamethrin, diazinon, permethrin and cypermethrin. In terms of concentration in the samples, however, the highest was diazinon, followed by lambda-cyhalothrin, chlorpyrifos, bifenthrin, permethrin and fenvalerate, indicating higher concentrations of OP than SP in the samples. Permethrin residues were identified in 3 of the 16 samples with a mean concentration of 0.031 mgkg−1, while fenvalerate was detected in 12 of the samples at a mean concentration of 0.007 mgkg−1. Bifenthrin was detected in 6 of the samples at a mean concentration of 0.034 mgkg−1. The range of bifenthrin concentrations reported in the current study was higher than in the study of González-Rodríguez et al. (2008), who reported bifenthrin concentrations in lettuce of 0.02 to 0.05 mgkg−1 in Spain. Among the synthetic pyrethroids, lambda-cyhalothrin recorded the highest concentration and deltamethrin the lowest, whereas among the organophosphates diazinon recorded the highest concentration and chlorpyrifos the lowest. All the pesticides, except diazinon, recorded mean concentrations lower than their maximum residue limits. The detected pesticides and their concentrations reflect the pesticides used by farmers and suggest their practices towards the safe use of pesticides. In addition, pesticide application and pest infestation would influence the amount of residues found in the vegetables. The coexistence of many of these vegetables grown together on the same farms creates suitable conditions for attacks by different pests, which leads to the application of several pesticides.
Consumption trends of ready-to-eat vegetables
In total, 404 stakeholders of different groups along the ready-to-eat vegetable farm-to-fork chain responded to the study, giving a response rate of 99.5%. Non-participation was due to interview refusals (0.5%). From the interview schedule, the majority of the people (56%) who patronize ready-to-eat vegetables in the study area were within the 16 to 30 years age group, and 42% were above 30 years. The consumption frequencies revealed that females (51.23%) patronized ready-to-eat vegetables more than males (48.77%). A similar survey conducted in the study area (Fung et al. 2011) found the opposite, with males patronizing ready-to-eat vegetables more. This might be a result of the variation in respondents: respondents in this study included all the various people along the vegetable food chain (farmers, vegetable sellers, food vendors, buyers (consumers) and restaurateurs), whereas only buyers (consumers) of salads at cafeterias and street vending sites were interviewed in the study conducted by Fung et al. (2011). However, a non-parametric Levene's test showed homogeneity of variance (p > 0.05) between the genders, indicating no statistically significant difference between males and females consuming ready-to-eat vegetables in Kumasi (detailed in Additional file 2). Consumers of ready-to-eat vegetables in this study consumed on average 18.60 g per meal, and 355 consumers (87.44%) stated that they consume these salads once a day if they do eat them.
Twelve respondents (2.96%) consumed ready-to-eat vegetables three times a day, while 9.60% consumed them twice a day. Moreover, the study revealed that over a quarter (29.80%) of the consumers consumed ready-to-eat vegetables once a week, while 67 (16.50%) consumed them twice a week. Fifty-three respondents (13.05%) consumed ready-to-eat vegetables once every 2 weeks, 48 (11.83%) once every month and 11 (2.71%) every day. The mean consumption rate was recorded as 23.5 gd−1, while the average body weight of consumers in the study area was 68.13 kg. Based on these concentrations and the consumption trends of salads, the average estimated daily intakes were determined and, subsequently, the health risk assessment was carried out. Several methods, such as the point of departure index (PODI) and the margin of exposure (MOE), could be used to evaluate the risk; however, this study used the hazard index (HI) and the RPF approach, although all of these methods have their limitations. The RPF approach, for example, can only be used if the effects of the individual substances are dose-additive (Boon et al. 2008). Hazard indices (HI) for all residues detected are shown in Table 3. Lambda-cyhalothrin recorded the highest hazard index, with diazinon, chlorpyrifos, bifenthrin, permethrin, fenvalerate, deltamethrin and cypermethrin following in decreasing order. This order largely matched the order of the pesticide residue concentrations detected in the samples, except that, for concentrations, diazinon was the highest, followed by lambda-cyhalothrin. This suggests that the HIs are almost directly proportional to the pesticide residue concentrations detected in the samples, whereas with respect to the prevalence of the pesticide residues the HIs followed no particular order. The HIs of permethrin, diazinon, chlorpyrifos, fenvalerate and deltamethrin (Table 3) were less than 1 and therefore indicate no health risk. However, these pesticides could accumulate in the fatty tissues of consumers and exert chronic health effects, since some individual sample concentrations (RVS 1 for permethrin; RVS 3, 10, 11 and 15 for diazinon; RVS 12 and 13 for chlorpyrifos; RVS 11 for fenvalerate; and RVS 15 for deltamethrin) exceeded their respective MRLs.
Table 3 The mean concentration of pesticides, their EU MRL, EDI, ADI and health risk estimation through the consumption of ready-to-eat vegetables or salad samples (n = 16)
The HIs of deltamethrin in eggplant (3.1 × 10−3) and tomato (2.8 × 10−5), as well as of permethrin in eggplant (4.7 × 10−5) and tomato (4.1 × 10−3), reported for the same study area (Akoto et al. 2015), were similar to those determined in this study. In addition, the HIs of chlorpyrifos in eggplant (7.7 × 10−3), okra (1.2 × 10−3) and tomatoes (4.1 × 10−3) were similar to the HI of chlorpyrifos in this study. This indicates a moderate approach to pesticide application by vegetable farmers, since the HIs were similar and below the permissible limit. The hazard indices of all pesticide residues were below one and hence posed no health risk to consumers. The combined health risk for organophosphates was 2.45 × 10−3 (Table 4). This indicates that consumption of vegetables in the study area poses no significant health risk to consumers as far as organophosphates are concerned. Likewise, the synthetic pyrethroids recorded a combined health risk of 2.57 × 10−3, suggesting that people who patronize these vegetables may not experience adverse health impacts in their lifetime.
The synthetic pyrethroids recorded a higher combined health risk to consumers than the organophosphates.
Table 4 The combined health risk of various pesticides in ready-to-eat vegetables or salads
The total non-carcinogenic effect from the consumption of these ready-to-eat vegetables is the sum of the combined health risks of the organophosphates and synthetic pyrethroids detected in the samples; it amounted to 5.02 × 10−3, as shown in Table 4. Long-term exposure of ready-to-eat vegetable or salad consumers to pesticide residues in Kumasi was low for both OP and SP. The pesticides that contributed most to the mean chronic exposure were chlorpyrifos for OP and lambda-cyhalothrin for SP. Chlorpyrifos contributed most to chronic exposure for OP when the RPF approach was used, unlike diazinon when the HI was employed. Also, SP contributed most (96.25%) to the chronic cumulative intake/exposure of the pesticides detected in the samples. To assess whether there was a risk of exposure, the chronic cumulative intake of each pesticide group was compared with its corresponding ADI. The OP and SP chronic cumulative intakes did not exceed the ADI values of the corresponding index compounds (Table 5). According to these results, cumulative exposure to the 22 pesticides included in the study is not a major problem for people consuming ready-to-eat vegetables that contain pesticide residues of AChE-inhibiting compounds with the same mode of action. The study presents low values of chronic cumulative exposure relative to the chronic cumulative exposure reported for fruits and vegetables in Valencia, Spain (Quijano et al. 2016). The low chronic cumulative exposure values may be due to the pesticide residues being assessed in only one commodity (ready-to-eat vegetables or salads), as compared to the 19 commodities comprising fruits and vegetables in their study. The current study used methamidophos as the index compound for OP, as in the Danish study of pesticides in total diets, unlike acephate used for fruits and vegetables in Valencia (Quijano et al. 2016). The present study also used mean values, in contrast to the lower and upper bounds in the Valencia study and the 50th, 90th and 99th percentiles in the Dutch study (Boon et al. 2008). The use of different methodologies can lead to differences in the estimated exposures and underlines the need to harmonize the methodologies used in risk assessment. The results indicated that the estimated exposure is higher when HIs are used than when BMD-derived RPFs are used, although all estimates were relatively low compared to their respective ADIs.
Table 5 Chronic cumulative exposure to OP and SP through ready-to-eat vegetables in Kumasi, Ghana
It is always important to include scientific uncertainties in pesticide risk assessment, since they influence the results. One of the uncertainties that was addressed was the survey period, which was 3 weeks in this study; according to EFSA (2012), more than 2 days are recommended to estimate long-term exposure. However, the study did not include uncertainty factors such as processing factors (washing and peeling) that can influence the calculated exposure, resulting in a relatively lower estimated exposure. Incorporating these uncertainties into the pesticide risk assessment would not have influenced the results much, considering that the exposure levels were very low. Additionally, pesticide risk assessments are more appropriate for total diet exposure; nevertheless, this study focused on intakes of pesticide residues via ready-to-eat vegetables or salads.
The study shows that pesticide residues are present in all ready-to-eat vegetable or salad samples analyzed. At least one synthetic pyrethroid pesticide was detected in each sample analyzed whiles organophosphate pesticides were found in more than half of the samples. Fenvalerate and permethrin exceeded their maximum residue limits in one sample each. Likewise, chlorpyrifos exceeded its maximum residue limit in two samples and diazinon in the four samples. The cumulative nature of pesticides in humans makes their presence in ready-to-eat vegetables or salads problematic. Although chlorpyrifos, diazinon fenvalerate and permethrin exceeded their maximum residuelimits, they pose no adverse health effect to consumers per the health risk assessments. The combined health index of the various pesticide groups revealed no significant health risk for dietary ingestion of organophosphates and synthetic pyrethroid pesticide residues in ready-to-eat vegetables or salads. Therefore, no long-term consumer risk is expected. To guarantee food safety, continuous monitoring is recommended for pesticide residues in vegetables especially those eaten raw. Akoto O, Andoh A, Darko G, Eshun K, Osei-Fosu P. Health risk assessment of pesticides residue in maize and cowpea from Ejura, Ghana. Chemosphere. 2013;92:67–73. Akoto O, Gavor S, Appah MK, Apau J. Estimation of human health risk associated with the consumption of pesticide-contaminated vegetables from Kumasi, Ghana. Environ Monit Assess. 2015;187(5):244. Ali RB, Tahir A. Preliminary survey for pesticide poisoning in Pakistan. Pakistan J Biol Sci. 2000;3:1976–7. Amoah P, Dreschel P, Abaidoo RC, Ntow WJ. Pesticide and pathogen contamination of vegetables in Ghana's urban markets. Arch Environ Contam Toxicol. 2006;50:1–6. Amoah P, Lente I, Asem-Hiabile S, Abaidoo RC. Quality of Vegetables in Ghanaian Urban Farms and Markets. In: Irrigated urban vegetables production in Ghana: Characteristics, benefits and risk mitigation. (Eds.) Drechsel, P, Keraita, B. Colombo: IWMI; 2nd Edition. 2008. p. 89–103. Baig SA, Akhtera AN, Ashfaq M, Asi RM. Determination of the organophosphorus pesticide in vegetables by high performance liquid chromatography. Am Eurasian J Agric Environ Sci. 2009;6(5):513–9. Bassil KL, Vakil C, Sanborn M, Cole DC, Kaur JS, Kerr KJ. Cancer health effects of pesticides: systematic review. Cancer Fam Physician. 2007;53(10):1704–11. Bempah CK, Agyekum AA, Akuamoa F, Frimpong S, Buah-Kwofie A. Dietary exposure to chlorinated pesticide residues in fruits and vegetables from Ghanaian markets. J Food Compos Anal. 2016;46(July 2016):103–13. Berrada H, Fernández M, Ruiz MJ, Moltó JC, Mañes J, Font G. Surveillance of pesticide residues in fruits from Valencia during twenty months (2004/05). Food Control. 2010;21:36–44. Boon PE, Van Der Voet H, Van Raaij MTM, Van Klaveren JD. Cumulative risk assessment of the exposure to organophosphorus and carbamate insecticides in the Dutch diet. Food Chem Toxicol. 2008;46:3090–8. Botwe B, Ntow W, Kelderman P, Drechsel P, Carboo PD, Nartey PVK, et al. Pesticide residues contamination of vegetables and their public health implications in Ghana. J. Environ. Issues Agric. Dev. Ctries. 2011;3(2):10–8. Available at: https://www.icidr.org/jeiadc_vol3no2/Pesticide%20Residues%20Contamination%20of%20Vegetables%20and%20their%20Public%20Health%20Implications%20in%20Ghana.pdf. Chen C, Qian Y, Chen Q, Tao C, Li C, Li Y. Evaluation of pesticide residues in fruits and vegetables from Xiamen, China. Food Control. 2011;22:1114–20. 
Chowdhury MAZ, Banik S, Uddin B, Moniruzzaman M, Karim N, Gan SH. Organophosphorus and carbamate pesticide residues detected in water samples collected from paddy and vegetable fields of the Savar and Dhamrai upazila in Bangladesh. Int J Environ Res Public Health. 2012;9:3318–29. Darko G. Dietary intake of organophosphorus pesticide residues through vegetables from Kumasi, Ghana. Pestic Steward Alliance, 9th Annu Conf Albuquerque. 2009:2–29. Darko G, Acquah SO. Levels of organochlorine pesticide residues in dairy products in Kumasi, Ghana. Chemosphere. 2008;71(2):294–8. Darko G, Akoto O. Dietary intake of organophosphorus pesticide residues through vegetables from Kumasi, Ghana. Food Chem Toxicol. 2008;46:3703–6. Darko G, Akoto O, Oppong C. Persistent organochlorine pesticide residues in fish, sediments and water from Lake Bosomtwi, Ghana. Chemosphere. 2008;72:21-24. http://dx.doi.org/10.1016/j.chemosphere.2008.02.052. Degri MM, Zainab JA. A study of insect pest infestations on stored fruits and vegetables in the north eastern Nigeria. Int J Sci Nat. 2013;4(4):646–50. EFSA. Guidance on the Use of Probabilistic Methodology for Modelling Dietary Exposure to Pesticide Residues. 2012;10(10):2839. doi:10.2903/j.efsa.2012.2839. Retrieved from: http://www.efsa.europa.eu/en/efsajournal/pub/2839.htm. Accessed 2 Apr 2017. EU. Pesticide database, pesticide residues MRLs. Plants, Eur. Com. 2013. Available from: https://ec.europa.eu/food/plant/pesticides/max_residue_levels/eu_rules/mrls_2013_en. Accessed 5 June 2016. FQPA (Food Quality Protection Act) (1996). Incorporation of Uncertainty Factors/Extrapolation Factors and the Target Margin of Exposure. In: USEPA. Pyrethrins/pyrethroid Cumulative Risk Assessment. www.epa.gov/pesticides/cumulative. 2011. p. 35-36. (Accessed 2 April 2017). Fung J, Keraita B, Konradsen F, Moe C, Akple M. Microbiological quality of urban-vended salad and its association with gastrointestinal diseases in Kumasi, Ghana. Internatinal J Food Safety, Nutr Public Heal. 2011;4343(24):152–66. González-Rodríguez RM, Rial-Otero R, Cancho-Grande B, Simal-Gándara J. Occurrence of fungicide and insecticide residues in trade samples of leafy vegetables. Food Chem. 2008;107(3):1342–7. Hogarh JN, Seike N, Kobara Y, Ofosu-Budu GK, Carboo D, Masunaga S. Atmospheric burden of organochlorine pesticides in Ghana. Chemosphere. 2014;102:1-5. doi: 10.1016/j.chemosphere.2013.10.019. Horna D, Smale M, Al Hassan R, Falck Zepeda J, Timpo SE. Insecticide use on vegetables in Ghana. International food policy research institute IFPRI Discussion paper 2008. 00785 36 pp. Hossain S, Fakhruddin ANM, Chowdhury MAZ, Rahman MA. Health risk assessment of selected pesticide residues in locally produced vegetables of Bangladesh. Int Food Reasearch J. 2015;22(1):110–5. Jensen B, Petersen A, Christensen T. Probabilistic assessment of the cumulative dietary acute exposure of the population of Denmark to organophosphorus and carbamate pesticides. Food Addit Contam Part A. 2009;26(7):1038–48. Longnecker MP, Rogan WJ, Lucier G. The human health effects of DDT (dichlorodiphenyltrichloroethane) and PCBs (polychlorinated biphenyls) and an overview of organochlorines in public health. Annu Rev Public Health. 1997;18:211–44. Ntow WJ, Gijzen HJ, Kelderman P, Drechsel P. Farmer perceptions and pesticide use practices in vegetable production in Ghana. Pest Manag Sci. 2006;62:356–65. Payá P, Anastassiades M, Mack D, Sigalova I, Tasdelen B, Oliva J, et al. 
Analysis of pesticide residues using the quick easy cheap effective rugged and safe (QuEChERS) pesticide multiresidue method in combination with gas and liquid chromatography and tandem mass spectrometric detection. Anal Bioanal Chem. 2007;389:1697–714. Pérez JJ, Ortiz R, Ramírez ML, Olivares J, Ruíz D, Montiel D. Presence of organochlorine pesticides in xoconostle (Opuntia joconostle) in the central region of Mexico. Int. J. Food Contam. 2016;3(1):21. Available at: https://link.springer.com/article/10.1186/s40550-016-0044-4. Quijano L, Yusà V, Font G, Pardo O. Chronic cumulative risk assessment of the exposure to organophosphorus, carbamate and pyrethroid and pyrethrin pesticides through fruit and vegetables consumption in the region of Valencia (Spain). Food Chem Toxicol. 2016;89(January):39–46. Reffstrup TK, Larsen JC, Meyer O. Risk assessment of mixtures of pesticides. Current approaches and future strategies. Regul Toxicol Pharmacol. 2010;56:174–92. Saha N, Zaman MR. Evaluation of possible health risks of heavy metals by consumption of foodstuffs available in the central market of Rajshahi City, Bangladesh. Environ Monit Assessment. 2012;185(5):3867–78. Solecki R, Davies L, Dellarco V, Dewhurst I, Raaij MV, Tritscher A. Guidance on setting of acute reference dose (ARfD) for pesticides. Food Chem. Toxicology. 2005;43:1569–93. Stadlmayr B, Charrondière UR, Burlingame B. Development of a regional food composition table for West Africa. Food Chem. 2013;140(3):443–6. Available from: http://dx.doi.org/10.1016/j.foodchem.2012.09.107. U.S. Environmental Protection Agency. Draft Document. "Consideration of the FQPA Safety Factor and Other Uncertainty Factors in Cumulative Risk Assessment of Chemicals Sharing a Common Mechanism of Toxicity;" February 28, 2002. Office of Pesticide Programs, Office of Prevention, Pesticides, 2002;67(40):9273. US EPA. Supplementary guidance for conducting health risk assessment of chemical mixtures. US Environmental Protection Agency. Risk Assess. Forum Tech. Panel. Off. EPA/630/R-00/002. 2000. USEPA. Organophosphorus Cumulative Risk Assessment 2006. Available from: https://www.google.com.gh/url?sa=t&rct=j&q=&esrc=s&source=web&cd=3&cad=rja&uact=8&ved=0ahUKEwizrojS1LHVAhUIDMAKHXUkDgcQFgg1MAI&url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Bjsessionid. 2006. p. 51–68. Accessed 2 Apr 2017. USEPA. Pyrethrins/pyrethroid Cumulative Risk Assessment. www.epa.gov/pesticides/cumulative. 2011. p. 31–6. (Accessed: 2 April 2017). Verma P, Agrawal M, Sagar R. Assessment of potential health risks due to heavy metals through vegetable consumption in a tropical area irrigated by treated wastewater. Environ Syst Decis. 2015;35(3):375–88. Wang HS, Sthiannopkao S, Du J, Chen ZJ, Kim KW, Yasin MSM, et al. Daily intake and human risk assessment of organochlorine pesticides (OCPs) based on Cambodian market basket data. J Hazard Mater. 2011;192:1441–9. WHO. The world health report 2002. Geneva: Reducing risks, promoting healthy life, WHO; 2002. World Health Organization. Cyhalothrin. Environmental Health Criteria.International Programme on Chemical Safety 1990. p. 99; Geneva. Available at: http://www.inchem.org/documents/ehc/ehc/ehc99.htm. Yuan Y, Chen C, Zheng C, Wang X, Yang G, Wang Q, et al. Residue of chlorpyrifos and cypermethrin in vegetables and probabilistic exposure assessment for consumers in Zhejiang Province, China. Food Control. 2014;36(1):63–8. The authors wish to thank DANIDA for the financial support through the project: Safe Water for Food (SaWaFo), KNUST. 
They provided financial support for the field experiment (interviewing consumers) and for the laboratory analysis of pesticide residues in ready-to-eat vegetables (salad). The raw data (in Excel form) have been submitted as part of Additional file 1 for reference.
Author affiliations: Department of Food Science and Technology, College of Science, Kwame Nkrumah University of Science and Technology, Kumasi, Ghana (Samuel Akomea-Frempong, Isaac W. Ofosu); Department of Mathematics, College of Science, Kwame Nkrumah University of Science and Technology, Kumasi, Ghana (Emmanuel de-Graft Johnson Owusu-Ansah); Department of Chemistry, College of Science, Kwame Nkrumah University of Science and Technology, Kumasi, Ghana (Godfred Darko).
Author contributions: SAF and IWO designed the study. SAF collected survey and analytical data and drafted the manuscript. IWO, EDJO and GD made substantial contributions to the analysis and interpretation of data. GD reviewed the manuscript for intellectual content and contributed substantially to the writing of the manuscript. All authors read and approved the final manuscript. Correspondence to Samuel Akomea-Frempong.
Table 6 Pesticide concentrations in the different ready-to-eat vegetables (mgkg−1) presented as the mean ± standard deviation (n = 16) with the range in parentheses
Additional file 1: Food Frequency Questionnaire. (DOCX 17 kb)
Additional file 2: Relative potency factors (RPF) used in the cumulative assessment. (DOCX 19 kb)
Akomea-Frempong, S., Ofosu, I.W., Owusu-Ansah, E.d.J. et al. Health risks due to consumption of pesticides in ready-to-eat vegetables (salads) in Kumasi, Ghana. International Journal of Food Contamination 4, 13 (2017). doi:10.1186/s40550-017-0058-6
Keywords: Organophosphate, Synthetic pyrethroids
February 2020, 14(1): 117-132. doi: 10.3934/ipi.2019066
Nonlocal TV-Gaussian prior for Bayesian inverse problems with applications to limited CT reconstruction
Didi Lv 1, Qingping Zhou 1, Jae Kyu Choi 2, Jinglai Li 3 and Xiaoqun Zhang 4
1 School of Mathematical Sciences and Institute of Natural Sciences, Shanghai Jiao Tong University, Shanghai 200240, China
2 School of Mathematical Sciences, Tongji University, Shanghai 200082, China
3 Department of Mathematical Sciences, University of Liverpool, Liverpool L69 6ZL, United Kingdom
4 School of Mathematical Sciences, MOE-LSC, and Institute of Natural Sciences, Shanghai Jiao Tong University, Shanghai 200240, China
* Corresponding author: Xiaoqun Zhang
Received April 2019 Revised August 2019 Published November 2019
Fund Project: The work is supported by NSFC grants (No. 11771288, 11301337, 91630311) and National key research and development program No. 2017YFB0202902.
Bayesian inference methods have been widely applied in inverse problems due to their ability to characterize the uncertainty of the estimation. The prior distribution of the unknown plays an essential role in Bayesian inference, and a good prior distribution can significantly improve the inference results. In this paper, we propose a hybrid prior distribution combining the nonlocal total variation regularization (NLTV) and the Gaussian distribution, namely the NLTG prior. The advantage of this hybrid prior is two-fold. The proposed prior models both texture and geometric structures present in images through the NLTV. The Gaussian reference measure also provides the flexibility of incorporating structure information from a reference image. Some theoretical properties are established for the hybrid prior. We apply the proposed prior to the limited tomography reconstruction problem, which is difficult due to severe data missing. Both maximum a posteriori and conditional mean estimates are computed through two efficient methods, and the numerical experiments validate the advantages and feasibility of the proposed NLTG prior.
Keywords: Bayesian inverse problems, limited tomography, nonlocal total variation, Gaussian measure, uncertainty quantification.
Mathematics Subject Classification: Primary: 58F15, 58F17; Secondary: 53C35.
Citation: Didi Lv, Qingping Zhou, Jae Kyu Choi, Jinglai Li, Xiaoqun Zhang. Nonlocal TV-Gaussian prior for Bayesian inverse problems with applications to limited CT reconstruction. Inverse Problems & Imaging, 2020, 14 (1) : 117-132. doi: 10.3934/ipi.2019066
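The abstract above does not spell out the functional form of the NLTG prior. The following is only a rough finite-dimensional sketch, assuming (as a simplification) that the unnormalized negative log-density of the hybrid prior is a nonlocal total variation term on a weighted pixel graph, in the spirit of Gilboa and Osher, plus a Gaussian quadratic term centred at a reference image; the paper's precise infinite-dimensional formulation may differ.

```python
import numpy as np

def nltv(u, W):
    """Nonlocal total variation of a flattened image u, given a symmetric
    non-negative weight matrix W: sum_x sqrt(sum_y w(x,y) * (u(y) - u(x))^2)."""
    diff2 = (u[None, :] - u[:, None]) ** 2      # pairwise squared differences
    return np.sum(np.sqrt(np.sum(W * diff2, axis=1)))

def neg_log_nltg_prior(u, u_ref, C_inv, lam, W):
    """Unnormalized negative log-density of the sketched hybrid prior:
    lam * NLTV(u) + 0.5 * (u - u_ref)^T C^{-1} (u - u_ref)."""
    d = u - u_ref
    return lam * nltv(u, W) + 0.5 * d @ C_inv @ d

# Tiny illustrative example (4-pixel "image", hypothetical weights):
rng = np.random.default_rng(0)
u = rng.normal(size=4)
u_ref = np.zeros(4)                  # reference image (e.g. a prior scan)
W = np.full((4, 4), 0.25)
np.fill_diagonal(W, 0.0)             # no self-edges in the pixel graph
C_inv = np.eye(4)                    # inverse covariance of the Gaussian reference measure
print(neg_log_nltg_prior(u, u_ref, C_inv, lam=1.0, W=W))
```

In a sampling or MAP context, this quantity would be added to the data-misfit term of the likelihood; the weight matrix W would normally be built from patch similarities in the reference image rather than set to a constant.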
Figure 1. XCAT images: origin, ground truth and reference with different noise levels
Figure 2. MAP results with sinogram noise level 5
Figure 3. MAP results with sinogram noise level 20
Figure 4. CM results with different sinogram data noise levels and reference images. Upper: NLTG; Lower: TG
Figure 5. The 95% confidence interval for different sinogram data noise levels and reference images. The range of the values is from 0 (black) to 100 (whitest). Upper: NLTG; Lower: TG
Table 1. MAP results: PSNR/SSIM for different sinogram noise levels and reference images (dashes mark values not repeated in the source)
Noise  Ref.              FBP         TV          TGV         TG          NLTV        NLTG
5      $u_\mathrm{ref}^1$  13.30/0.21  18.98/0.60  21.21/0.71  29.08/0.88  29.69/0.87  30.71/0.91
5      $u_\mathrm{ref}^2$  -           -           -           28.22/0.87  28.42/0.84  28.88/0.86
20     $u_\mathrm{ref}^1$  9.40/0.06   15.66/0.46  18.26/0.48  23.10/0.54  23.92/0.78  24.72/0.79
Table 2. CM results: PSNR and SSIM for different levels of sinogram noise and reference images
Noise  Ref.              PSNR (NLTG)  PSNR (TG)  SSIM (NLTG)  SSIM (TG)
5      $u_\mathrm{ref}^1$  27.73        21.44      0.80         0.46
5      $u_\mathrm{ref}^2$  27.90        20.95      0.66         0.40
20     $u_\mathrm{ref}^1$  25.97        19.58      0.62         0.37
BMC Neuroscience Methodology article Technical considerations of a game-theoretical approach for lesion symptom mapping Melissa Zavaglia1,2, Nils D. Forkert1,3, Bastian Cheng4, Christian Gerloff4, Götz Thomalla4 & Claus C. Hilgetag1,5 BMC Neuroscience volume 17, Article number: 40 (2016) Cite this article Various strategies have been used for inferring brain functions from stroke lesions. We explored a new mathematical approach based on game theory, the so-called multi-perturbation Shapley value analysis (MSA), to assess causal function localizations and interactions from multiple perturbation data. We applied MSA to a dataset composed of lesion patterns of 148 acute stroke patients and their National Institutes of Health Stroke Scale (NIHSS) scores, to systematically investigate the influence of different parameter settings on the outcomes of the approach. Specifically, we investigated aspects of MSA methodology including the choice of the predictor algorithm (typology and kernel functions), training dataset (original versus binary), as well as the influence of lesion thresholds. We assessed the suitability of MSA for processing real clinical lesion data and established the central parameters for this analysis. We derived general recommendations for the analysis of clinical datasets by MSA and showed that, for the studied dataset, the best approach was to use a linear-kernel support vector machine predictor, trained with a binary training dataset, where the binarization was implemented through a median threshold of lesion size for each region. We demonstrated that the results obtained with different MSA variants lead to almost identical results as the basic MSA. MSA is a feasible approach for the multivariate lesion analysis of clinical stroke data. Informed choices need to be made to set parameters that may affect the analysis outcome. Lesion analysis reveals causal contributions of brain regions to mental functions, aiding the understanding of normal brain function and rehabilitation of brain-damaged patients. Historically, brain lesions were one of the few available sources of information by which functions of the human brain could be inferred. Although lesion inferences have made an enormous contribution to understanding the human brain and have laid the basis for attributing mental functions to specific brain regions [e.g. 1], they also have several drawbacks, such as the difficulty to infer function on the basis of the behavior of individual patients, the principal assumption of the localization of function, as well as the plasticity of the injured brain [2]. Today, a broad range of additional techniques exist to investigate the functions of the living brain through the correlation of behavior and cognition with brain activity, as shown by electrophysiology and functional imaging. In this context, lesion inferences, which link behavioral functions directly and causally to a neural substrate, still have an important role in modern neuroscience [2]. Despite the exciting promise of lesion inferences for identifying causal functional contributions, they are limited by several conceptual difficulties. Young et al. [3] discussed the potential unreliability of classical inference methods, such as single and double dissociations. In a single dissociation, a lesion of brain structure a disrupts function A but not function B; suggesting that functions A and B have some independence. 
Double dissociations arise when function A is disturbed by lesion of a and not of region b, while function B is disturbed by lesioning b and not a. This observation also leads to the conclusion of independent functions A and B, and their attribution to lesioned regions a and b, respectively [4]. The principal problem of single dissociations is the impossibility to assess if an apparently specific deficit arises from the impairment of a specific, localized process or from more general lesion impairment. In fact, a behavioral deficit after a lesion could be evidence for an interdependent hierarchy of functions in which the lesioned region plays a contributing role, rather than evidence for a localization of the function. Although double dissociations appear to represent a conceptual improvement over single dissociations in correctly ascribing functions to brain regions, they can also result in incorrect conclusions. Striking examples in this respect are so-called paradoxical lesion effects, such as the reversal of visual hemineglect during bilateral cortical or collicular inactivation in the cat brain [5]. An extensive variety of paradoxical lesion effects has been documented [6, 7], including two major types of paradoxical functional facilitation (PFF): restorative PFF and enhancing PFF. Restorative PFF arises when damage to intact brain tissue returns a previously sub-normal level of functioning back to normal. One example is the Sprague effect [8], where superior collicular lesions can result in a (partial) restoration of visual functioning following an initial visual cortical lesion of the contralateral hemisphere. By contrast, an enhancing PFF occurs when a subject with apparent nervous system pathology or sensory loss performs better than healthy control subjects on a particular task. These paradoxical effects arise because the brain is not just a collection of independent functional processors, but a complex system in which brain function emerges from the multiple interactions of distributed yet interlinked brain regions. Therefore, the conceptual problems linked to single or double dissociations also extend to higher-order inferences (e.g., triple dissociation), and it becomes apparent that, in a strict sense, the true causal contributions of brain regions to behavior can only be correctly inferred from evaluating all combinations of intact and injured brain regions together with their behavioral scores. In this context, a number of traditional strategies as well as current approaches for lesion inference, all computed on a voxel-by-voxel basis, were reviewed by Rorden and Karnath [2]; such as voxel-based morphometry (VBM) [9, 10], BrainVox [11], voxel-based lesion-symptom mapping (VLSM) [12], and voxel-based analysis of lesions (VAL) [13]. Kinkingnéhun et al. [14] introduced a method called anatomo-clinical overlapping maps (AnaCOM) which uses, in contrast to tradition statistical approaches for voxel-wise lesion behavior mapping (LBM, [15]), neurologically healthy individuals as control population instead of data from neurological patients [14, 15]. All statistical approaches mentioned above have, as a main drawback, the need to control for false positives [16]. Recently, Smith et al. [17] introduced a new approach, called multivariate pattern analysis (MVPA), to predict the presence or absence of spatial neglect from brain injury maps. 
Specifically, the authors used two machine-learning techniques, based on linear and nonlinear support vector machines (SVMs), to classify individuals based on structural brain scans with right hemisphere lesions. A recent study of ischemic stroke patients by Forkert et al. [18] demonstrated that information about stroke location (specifically, lesion overlap measurements) can improve the prediction of functional outcome (as measured by the modified Rankin Scale) by multiclass SVMs. Principally, lesion inference approaches can be classified on the basis of the univariate or multivariate nature of the used method. Multivariate analysis methods account for inherent dependencies among regions of interest. Such dependencies may lead to substantial mis-inferences of univariate analysis methods, as demonstrated by Mah et al. [19] through ground-truth lesion simulations. The majority of lesion mapping approaches to date has applied univariate regression models [20]. Multivariate approaches based on machine learning, such as MVPA, were introduced to lesion analyses by the work of Smith et al. [17]. Similarly, Zhang et al. [21] developed a multivariate lesion symptom mapping approach using a machine learning-based multivariate regression algorithm. The authors showed that, in comparison with VLSM, the new approach has higher sensitivity for identifying the lesion-behavior relations, both on synthetic and real datasets. A recent study by Corbetta et al. [22] proposed a multivariate approach based on machine learning to examine lesion-behavior relationships across multiple domains in a large sample of patients. In contrast to these approaches founded on machine learning, multi-perturbation Shapley value analysis (MSA) [23] has been suggested as an alternative inference approach based on game theory [24]. MSA represents a mathematical method to assess causal function localization from multiple perturbation data. It considers brain regions as network elements or players that interact in a game to achieve a particular behavior. The MSA computes the contributions of these elements and also their interactions from a dataset of multiple lesions. The MSA approach has been applied to neuroscience [25, 26], biochemistry, and genetics [27, 28]. In the context of clinical lesion analysis, Kaufman et al. [29] applied MSA to lesion data and line bisection test scores of 23 right hemisphere stroke patients. The study focused on 11 grey and white matter regions and used a predictor (specifically, a k-nearest neighbor algorithm) trained on the patient injury data to obtain the line bisection performance for all injury configurations. The approach revealed that among the 11 regions of interest, only four (the supramarginal and angular gyri of the inferior parietal lobule, the superior parietal lobule, the anterior part of the temporo-parietal junction, and the thalamus) had a pivotal contributing role for the given task. While this proof-of-principle paper demonstrated the practical applicability of MSA to clinical image data, it was based on a small sample of 23 cases and low-resolution CT images. In a recent paper by our group [24], we applied the MSA approach, in comparison with other lesion inference methods, to a large multi-centre dataset of stroke patient data, to investigate functional contributions of eight large-scale bilateral volumes of interest (VOIs). 
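As a concrete illustration of the SVM-based multivariate approaches cited above, the following sketch trains linear and nonlinear SVM classifiers on per-region lesion features to predict a binary deficit label. It is not the pipeline of Smith et al. or Forkert et al.; the lesion-overlap values and labels are synthetic placeholders, and scikit-learn is used here purely for convenience. A model of the same family reappears below as the predictor that supplies performance scores for unseen lesion configurations in the predicted MSA.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(42)

# Synthetic stand-in data: per-patient lesion overlap (%) in 8 VOIs and a
# binary deficit label; real studies would use measured overlaps and scores.
n_patients, n_vois = 80, 8
X = rng.uniform(0, 100, size=(n_patients, n_vois))
y = (X[:, 2] + X[:, 5] > 100).astype(int)   # deficit driven by two VOIs (toy rule)

for kernel in ("linear", "rbf"):            # linear vs. nonlinear SVM, as in MVPA studies
    clf = make_pipeline(StandardScaler(), SVC(kernel=kernel, C=1.0))
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{kernel:>6} kernel: mean CV accuracy = {acc:.2f}")
```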
The dataset comprised lesion patterns of 148 acute stroke patients together with their neurological deficits, assessed by the National Institutes of Health Stroke Scale (NIHSS, [30]). The analysis, which revealed contributions to essential behavioral and cognitive functions particularly by subcortical structures, contributed to the interpretation of NIHSS in clinical practice as well as clinical trials. MSA is a novel computational method, which has not yet been explored in sufficient detail from a technical perspective for the application to stroke lesion inference. In the present study, we applied MSA to the same large sample of patient lesion data used previously [24], to investigate open methodological and technical aspects of the MSA approach in lesion inference and systematically determine the influence of different parameter settings on the results of this approach. More precisely, the main goals of this study were to, first, compute a sensitivity analysis on the parameters characterizing the preparation of the dataset for MSA, that is, lesion definition and lesion prediction, second, identify the most important parameters for this analysis, third, investigate MSA methodological variants and, finally, assess the suitability of MSA for processing real clinical lesion data. Behavioral and lesion image data For the present study, we used a large multi-centre dataset of stroke patients, described in a recent paper by our group [24]. The used data comprise a population of acute stroke patients included in a multi-centre observational study, designed to analyze the combined use of FLAIR (fluid attenuated inversion recovery MR imaging) and DWI (diffusion-weighted MR imaging) for identifying patients with acute ischemic stroke within 4.5 h of symptom onset (the PRE-FLAIR study [31, 32]). All patients in this study were studied within 12 h of witnessed stroke onset and severity of neurologic deficit on admission was assessed using the global NIHSS. The NIHSS, which is a rating scale resulting from a standardized neurological examination, quantifies symptom severity in acute stroke [30] by scoring 11 items representing specific abilities, with scores ranging between 0 (no symptoms, correct performance of task) and 2–4 (maximum symptom severity for corresponding item). These items include the: Level of Consciousness, Horizontal Eye Movement, Visual field, Facial Palsy, Motor Arm, Motor Leg, Limb Ataxia, Sensory, Language (Aphasia), Dysarthria, Extinction and Inattention. Higher NIHSS scores indicate a more severe impairment. A global score is calculated by summing up the individual score values. Details of imaging parameters and clinical characteristics for this study cohort were described previously [24]. Lesion image processing For quantitative lesion analysis, the same processing pipeline was used as described in [24]. Briefly, the lesions were segmented in each DWI dataset using a semi-automatic intensity-based method. After lesion definition, the 152 MNI brain atlas [33] was registered to each DWI dataset using a surface-based registration method. The resulting transformation was then used to transform the corresponding MNI atlas brain regions to each DWI dataset, which were then used for lesion overlap quantification (in %). The study focused on eight bilateral VOIs, defined by the MNI structural atlas: caudate (CAU), insula (IN), frontal (FR), occipital (OCC), parietal (PAR) and temporal lobes (TEM), as well as putamen (PUT), and thalamus (TH), which covered the whole brain. 
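The VOI-wise lesion overlap quantification described above can be sketched as follows. This is only the final overlap step, assuming a binary lesion mask and an integer-labelled atlas already aligned in the same voxel space; the actual study performed surface-based registration of the MNI atlas and semi-automatic lesion segmentation beforehand.

```python
import numpy as np

def voi_lesion_overlap(lesion_mask, atlas_labels, voi_ids):
    """Percentage of each atlas VOI covered by the (binary) lesion mask.
    lesion_mask and atlas_labels must be voxel-wise aligned arrays."""
    overlap = {}
    for voi in voi_ids:
        voi_voxels = (atlas_labels == voi)
        n_voi = voi_voxels.sum()
        if n_voi == 0:
            overlap[voi] = float("nan")
        else:
            overlap[voi] = 100.0 * np.logical_and(lesion_mask, voi_voxels).sum() / n_voi
    return overlap

# Toy example: a 4x4x4 "brain" with two VOIs (labels 1 and 2) and a small lesion.
atlas = np.zeros((4, 4, 4), dtype=int)
atlas[:2] = 1
atlas[2:] = 2
lesion = np.zeros_like(atlas, dtype=bool)
lesion[0, :2, :2] = True                                   # lesion confined to VOI 1
print(voi_lesion_overlap(lesion, atlas, voi_ids=[1, 2]))   # {1: 12.5, 2: 0.0}
```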
The overlap (in %) between each of the 16 transformed anatomical regions, as defined in the MNI structural atlas, and the patient-specific acute ischemic stroke lesion was calculated for each patient. The resulting dataset was composed of 148 patient cases with different patterns of lesioned VOIs (77 left-only, 72 right-only, one patient without lesions across all VOIs who was included in both hemisphere sets) and the corresponding global NIHSS values of the patients.
Preliminary analysis
As in Zavaglia et al. [24], we first employed two simple lesion inference approaches to assess the relative lesion size and frequency, using Lesion Overlap and Median VOI Lesion Overlap. The first method simply calculates the percentage of patients who display a lesion in each voxel of the MNI brain atlas (in %). The second method is similar to the first one, but is applied to VOIs instead of single voxels. This approach shows the median overlap between the 16 anatomical MNI brain atlas regions and the patient-specific acute ischemic stroke lesions. For further details see Zavaglia et al. [24].
Multi-perturbation Shapley value analysis: general approach
As an alternative to traditional inference approaches, MSA is a rigorous method for assessing causal function localization from multiple perturbation data. It addresses the challenge of defining and calculating the contributions of network elements from a dataset of multiple lesions (multiple perturbation experiments) and their performance scores. It objectively quantifies not only the contributions of network elements, but also their interactions. MSA is based on coalitional game theory [34]. In general, the linked system elements (in this study, the volumes of interest) can be seen as players in a game, and a perturbation configuration represents a subset of elements, which are perturbed concomitantly. The set of elements which are left intact represents the coalition of players. For each configuration, the performance of the system (here, the inverse of the NIHSS), which can be seen as the worth of the coalition, is measured. The aim of the analysis is to assign values that represent the elements' contribution, or importance, for the overall function. The contribution of an element represents the worth of the coalitions which contain the element, in relation to the worth of coalitions that do not contain it. The formal procedure is described below. If we consider a system composed of N = {1, … , n} elements that perform a task, we can define a coalition S, where S ⊆ N, and a performance score v(S), which is a real number representing the performance measured for the perturbation configuration in which all the elements in S are intact and the rest are perturbed. The established solution concept in game theory and economics for this type of coalitional game is the Shapley value [34]. The marginal importance of player i to a coalition S, with i ∉ S, is given by Eq. 1: $$\Delta_{i}(S) = v(S \cup \{i\}) - v(S) \quad (1)$$ The Shapley value of each player i ∈ N is defined by Eq. 2 (shown below), where \({\mathcal{R}}\) is the set of all n! orderings of N and S_i(R) is the set of players preceding i in the ordering R. If we assume that all the players are arranged in some order, all orders being equally likely, the Shapley value can be interpreted as the expected marginal importance of a player i to the set of players that precede it.
$$\gamma_{i}(N, v) = \frac{1}{n!} \sum_{R \in \mathcal{R}} \Delta_{i}\left(S_{i}(R)\right) \quad (2)$$ We define a configuration to be an indicator vector for the set of unperturbed elements, that is, a binary vector of length n, with c_i = 1 if i ∈ S and c_i = 0 if i ∉ S. For a detailed, formal description of MSA see [23]. When all possible 2^n perturbation configurations are known, the Shapley value can be computed either using Eq. 2, where the summation runs over all n! orderings of N, or as a summation over all 2^n configurations, properly weighted by the number of possible orderings of the elements (Full Information MSA).
Multi-perturbation Shapley value analysis: method variants
Predicted MSA
Frequently, the complete set of performance scores for all combinations of the binary states of a set of regions required for MSA is not available, due to the difficulty of experimentally accessing all perturbation configurations. In those cases, a prediction model trained on the available set of configurations and performance scores can be used to predict the performance scores corresponding to all binary configurations. In this study, a total of 2^n = 256 binary lesion configurations existed (where each VOI can be lesioned, "0", or intact, "1", and n = 8 is the number of VOIs for each hemisphere) and, correspondingly, 256 performance scores were required for the MSA (see "Multi-perturbation Shapley value analysis: general approach" section). However, the original input dataset used in this work was composed of only 77 graded lesion configurations (describing % lesion overlap) and corresponding performance scores for the eight left VOIs, and 72 graded lesion configurations and corresponding performance scores for the eight right VOIs. Therefore, we used a machine learning model trained on the available input dataset (see "Multi-perturbation Shapley value analysis: prediction of unknown performance scores" section for details) to predict the performance scores of all possible 2^n = 256 binary lesion configurations. After the prediction, all performance scores corresponding to the required 2^n binary configurations for each hemisphere were available, and it was possible to compute the Shapley value with the full information MSA (predicted MSA).
Estimated MSA
In studies where the number of system elements is too large to enumerate all configurations in a straightforward manner, the MSA can alternatively sample orderings and calculate an unbiased estimator of the contribution of each element (estimated MSA), as shown in Eq. 3, where \({\hat{\mathcal{R}}}\) represents a randomly sampled set of permutations. $$\hat{\gamma}_{i}(N, v) = \frac{1}{|\hat{\mathcal{R}}|} \sum_{R \in \hat{\mathcal{R}}} \Delta_{i}\left(S_{i}(R)\right) \quad (3)$$ It is important to note that a perturbation configuration may appear in different permutations. Therefore, the number of new configurations (and corresponding performance scores) for each sampled permutation decreases as more permutations are sampled [26]. The multi-perturbation configurations that are used in the estimated Shapley value method depend on the sampled orderings. However, the available dataset of performance measures for some set of perturbation configurations does not necessarily match the ones corresponding to a random permutation-configuration sample.
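For the small number of VOIs used here, the full-information Shapley value of Eq. 2 can be computed exactly once all 2^n performance scores are available. The following is a minimal sketch of that computation; the performance function v is a placeholder (in the study it would return the predicted inverse NIHSS for a given set of intact VOIs), and the toy example at the end is purely illustrative.

```python
from itertools import permutations

def shapley_values(n, v):
    """Exact Shapley values by enumerating all n! orderings of the n elements (Eq. 2).

    n : number of elements (VOIs); n = 8 gives 40,320 orderings, which is still tractable
    v : performance function mapping a frozenset of intact elements to a score
    """
    totals = [0.0] * n
    orderings = list(permutations(range(n)))
    for order in orderings:
        intact = set()
        for i in order:
            # marginal importance of i to the elements that precede it (Eq. 1)
            totals[i] += v(frozenset(intact | {i})) - v(frozenset(intact))
            intact.add(i)
    return [total / len(orderings) for total in totals]

# Toy performance function: each intact element adds 1, element 0 adds 2
toy_v = lambda intact: len(intact) + (1.0 if 0 in intact else 0.0)
print(shapley_values(4, toy_v))   # element 0 receives the largest contribution
```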
When the available cases do not cover these sampled configurations, a prediction model is trained, as for the predicted MSA, using the available set of perturbation configurations and corresponding performance scores, and is then used to predict the performance for any perturbation configuration generated by the sampled permutations (estimated MSA). In studies where the number of VOIs (n) is relatively small, there is no considerable advantage in using the estimated MSA, but where the number of VOIs is much larger, its use becomes fundamental, because the number of configurations and corresponding performance scores generated by the sampled permutations is much smaller than 2^n. The estimated MSA approach is suitable for large networks consisting of up to 100 network elements [26].
Multi-perturbation Shapley value analysis: prediction of unknown performance scores
Choice of machine learning predictor
There is no pre-specified prediction model for MSA, and it is an important task to select the best-suited algorithm for predicting unknown performance scores on the basis of the specific configuration of available performance values. This aspect is important, because it may affect the results of the subsequent MSA analysis. Among a large number of available classifiers, one can consider, for example, naive Bayes classifiers, regression trees, nearest neighbor algorithms, random forests, and support vector machines (SVM). The k-nearest neighbor (k-NN) approach has been used before for a similar purpose by Kaufman et al. [29], who applied it in their MSA study on spatial neglect patients. The k-NN method is a relatively simple interpolation algorithm in which an object is assigned a value based on the classes (i.e., functional performance scores) of its k nearest neighbors, for instance, based on Euclidean distance. Alternatively, support vector machines have been found to be very powerful, especially in the case of non-linear classification problems. Support vector machines can be applied to classification or regression problems [35]. Specifically, a supervised learning algorithm infers a function from labeled training data consisting of a set of training examples. The inferred function can be used for mapping new examples; that is, the algorithm can determine the class labels for unseen instances. In the present approach, we predicted the performance scores using the multiclass SVM (where the number of categories is larger than two) implemented in LIBSVM [36]. A sensitivity analysis on the settings of the SVM (i.e., its kernel function) was performed to select the best-suited parameters for our dataset.
Original-graded versus thresholded-binary dataset
Two options were investigated to predict the performance of the 256 binary configurations, using either the original-graded dataset or, alternatively, a thresholded-binary dataset (where "0" is lesioned and "1" is intact, created from the graded dataset after thresholding) as the training dataset. Both strategies have their inherent advantages and drawbacks. Using the original-graded dataset utilizes the information of the original data in the training without information loss due to binarization. However, the training and test data do not have the same features, since the predictor is trained with graded data and then tested with binary data (the 256 binary lesion configurations required by the MSA).
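A minimal sketch of such a predictor is given below. It uses scikit-learn's SVC, which wraps LIBSVM, as a stand-in for the LIBSVM interface used in the study; the per-VOI median binarization and the leave-one-out RMSE follow the procedure described in the surrounding paragraphs, while the lesion patterns and NIHSS scores are randomly generated placeholders rather than the real patient data.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut

def median_binarize(graded):
    """Binarize graded lesion overlaps per VOI: 0 = lesioned, 1 = intact.

    graded : (patients x VOIs) array of relative lesion sizes in [0, 1];
             the threshold is the median of the non-zero overlaps of each VOI.
    """
    binary = np.ones_like(graded, dtype=int)
    for j in range(graded.shape[1]):
        nonzero = graded[:, j][graded[:, j] > 0]
        threshold = np.median(nonzero) if nonzero.size else 1.0
        binary[graded[:, j] >= threshold, j] = 0
    return binary

def loo_rmse(X, y, kernel="linear"):
    """Leave-one-out RMSE of a multiclass SVM predicting integer NIHSS scores."""
    squared_errors = []
    for train, test in LeaveOneOut().split(X):
        clf = SVC(kernel=kernel, C=1.0, gamma=1.0)
        clf.fit(X[train], y[train])
        squared_errors.append((clf.predict(X[test])[0] - y[test][0]) ** 2)
    return float(np.sqrt(np.mean(squared_errors)))

# Hypothetical data: 77 patients, 8 VOIs, integer NIHSS scores between 0 and 21
rng = np.random.default_rng(1)
graded = rng.random((77, 8)) * (rng.random((77, 8)) > 0.5)
nihss = rng.integers(0, 22, size=77)
X_binary = median_binarize(graded)
for kernel in ("linear", "rbf"):
    print(kernel, loo_rmse(X_binary, nihss, kernel=kernel))
```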
Using a binary dataset (after thresholding) for training, by contrast, makes the type of training and test data identical, at the cost of major drawbacks: the necessary choice of an arbitrary threshold for the binarization, and the consequent loss of information. In particular, after binarization, the number of unique configurations changes, because some graded configurations become equal to each other at the binary level, but have different associated behavioral scores and also different graded lesion patterns. The binarization threshold may be global, that is, using a unique value for all VOIs, or individual, that is, computed individually for each VOI. In our study, we explored both approaches. As a first alternative, we binarized the original-graded dataset at different global thresholds by defining each VOI as lesioned ("0") or intact ("1") depending on whether the relative lesion size was larger or smaller than a given global threshold. Using a leave-one-out cross validation scheme, we trained the SVM respectively with the original-graded dataset graded from 0 to 1, where "0" represents a complete lesion and "1" a completely intact region (since the original overlap values are defined in the opposite direction, the values were computed as 1 minus the original graded overlap), and with the corresponding thresholded-binary dataset at different thresholds. Each SVM model was evaluated with leave-one-out cross validation, using the thresholded-binary dataset obtained with different thresholds. In this way, for each binarization threshold and for both types of training dataset (original-graded or thresholded-binary), we computed the root mean squared error (RMSE) in terms of the difference between the real and the predicted performance score. We focused on the SVM variants with the lowest RMSE for both types of training. Subsequently, the same analysis was repeated for three individual thresholds, computed respectively as the first (0.25), second (0.5, median threshold) and third (0.75) quartile of the non-zero relative lesion size individually for each (individual) VOI. For the individual thresholds, we also computed a measure of accuracy. Specifically, we considered the prediction successful when the absolute value of the difference between the predicted and the real performance score was not larger than a maximum tolerance error (= 3).
Lesion definition
Lesion size and NIHSS
Figure 1 represents the relative lesion size and the associated global NIHSS together with color bars indicating the range of variation in lesion size and NIHSS, respectively. The NIHSS is graded from 0 to 21, where zero means that the patient shows no behavioral deficit and a score of 21 indicates the severest impairment (the range is given for the current sample; higher NIHSS scores are possible). The figure demonstrates the segregation into left- and right-hemispheric lesions and the correlation of lesion size with NIHSS. Large structures, such as the cortical lobes, typically only have small relative lesions, while relative lesion sizes are larger for small (subcortical) structures. Percentage of lesioned voxels in 16 VOIs, and associated global NIHSS values for 148 patients.
The color scales indicate the range of lesioned voxels (graded from 0 to 100%) and the range of NIHSS values (graded from 0 to 21) Lesion overlap and median VOI lesion overlap Figure 2 shows the outcomes of the preliminary analysis approaches described in "Preliminary analysis" section, applied to the left and right lesions dataset separately and represented in the reference space of the MNI atlas (top row). The first, simple and straightforward assessment of the data is by the relative Lesion Overlap, shown in the second row on the MNI brain atlas (using neurological convention). This representation shows that all VOIs involved in the study are damaged to different extent. Due to these overlapping lesion patterns, it is difficult to establish a simple relationship between the lesions and the behavioral scores (NIHSS). As a second method to visualize the lesioned regions, we used Median VOI Lesion Overlap, Fig. 2, third row, showing the relative (percentage) infarction computed individually for each VOI. This method clearly indicates that, on a relative scale, the subcortical regions are most affected, especially in the right hemisphere. In Fig. 3 for each VOI in the left and right hemisphere, we also plotted the relative percentage of lesioned voxels for all patients with the corresponding median VOI infarction. Illustration of MNI atlas (by three representative slices of the MNI atlas covering all structural regions), lesion overlap and Median VOI lesion overlap, in neurological convention. While the lesion overlap focuses at the scale of voxels, Median VOI lesion overlap shows the relative (median percentage) infarction within the confines of the predefined 2 × 8 VOIs. The color map is the same for all measures, but at different scales. See also [24] Percentage of lesioned voxels (grey) and median values (black) for each VOI in the left (a) and right (b) hemispheres respectively Lesion prediction Original-graded versus thresholded-binary dataset and linear- versus RBF-kernel SVM: global threshold of binarization In order to assess the drawbacks and advantages of the two types of training sets, we present both prediction options with the corresponding results. In Fig. 4, we show the changes of RMSEs for left and right VOIs separately, obtained with thresholded-binary and original-graded training for two SVM classifiers, depending on the global threshold of binarization. Specifically we used a linear kernel (kernel type t = 0) SVM, with parameters set to default value (SVM type s = 0, cost c = 1), and a radial basis function (RBF) kernel (kernel type t = 2), with all parameters set to default value except for the gamma parameter (SVM type s = 0, cost c = 1, gamma g = 1). In panel e and f of Fig. 4, the number of unique configurations after the binarization process is illustrated. It becomes apparent that it is not straightforward to select a global threshold of binarization that gives the best results for both types of trainings, particularly because it is not sufficient to consider the RMSE, as also the numbers of unique (useful) configurations after binarization have to be taken into account (as anticipated in "Original-graded versus thresholded-binary dataset" section). A threshold of 50 %, for example, yields a good RMSE, but, at the same time, only 4 and 6 unique configurations are available for the left and right hemisphere, respectively, since at this threshold, only bilateral putamen and insula have lesioned elements. 
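The trade-off between the global binarization threshold and the number of unique (useful) configurations can be made explicit with a short computation. The sketch below counts unique binary lesion patterns for a range of hypothetical global thresholds; the graded dataset used here is a random placeholder, not the patient data.

```python
import numpy as np

def unique_configurations(graded, threshold):
    """Number of unique binary lesion patterns after global-threshold binarization.

    graded    : (patients x VOIs) array of relative lesion sizes in [0, 1]
    threshold : global threshold; an overlap >= threshold counts as lesioned (0)
    """
    binary = (graded < threshold).astype(int)   # 1 = intact, 0 = lesioned
    return np.unique(binary, axis=0).shape[0]

# Hypothetical graded dataset: 77 patients x 8 VOIs
rng = np.random.default_rng(2)
graded = rng.random((77, 8)) * (rng.random((77, 8)) > 0.5)
for threshold in (0.10, 0.25, 0.50, 0.75):
    print(threshold, unique_configurations(graded, threshold))
```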
In this context, an area-specific threshold for binarization, specifically and separately tailored for each VOI despite the variation in relative lesion size, could represent a good alternative. RMSE functions for SVM linear kernel (a, b) and SVM radial basis function (RBF) kernel (c, d) for both left and right hemispheres, depending on the global thresholds. Numbers of unique configurations for left and right damaged patients, depending on the global-threshold (e, f). The RMSE functions for the binary training are represented in black, and for the original training in dashed grey. The values of the RMSEs computed with the median-threshold binarization are reported here for comparison (Left SVM linear kernel: original-graded training RMSE = 5.5561, thresholded-binary training RMSE = 5.6454; SVM RBF kernel: original-graded training RMSE = 5.344, thresholded-binary training RMSE = 5.9467. Right SVM linear kernel: original-graded training RMSE = 5.3307, thresholded-binary training RMSE = 5.9663; SVM RBF kernel: original-graded training RMSE = 5.1841, thresholded-binary training RMSE = 5.8949) Original-graded versus thresholded-binary dataset: individual threshold of binarization Table 1 shows the results on the RMSE and accuracy (maximum tolerance error = 3) computed with individual thresholds, for left- and right-damaged patients datasets respectively, with both types of training and with the linear-kernel SVM (for brevity, we did not report the results of the RBF-kernel SVM). The median value threshold represents a good compromise between RMSE and accuracy, for both graded and binary training, as well as the number of useful configurations. Moreover, the classification accuracies obtained with the median threshold are considerably higher than the statistical chance levels, which were computed in the same way as prediction accuracy (maximum tolerance error ±3), but instead of the predicted scores we used NIHSS values that were randomly permutated for all patients. We repeated this procedure 100 times and obtained a mean chance level accuracy of 36 % for left regions and 34 % for right regions. Table 1 RMSE and accuracy (%) for SVM linear kernel prediction for both left and right hemispheres, computed with both original-graded and thresholded-binary training Training of the SVM using the original-graded values instead of the thresholded-binary values intuitively promises a better prediction, due to making fuller use of the continuously defined variables. Indeed, RMSE and accuracy for graded training were slightly better than for binary training. However, this procedure incurs the problem that the training data and the predicted data are of different types (graded versus binary lesions), which leads to biases in the prediction. Specifically, the prediction of NIHSS based on graded lesion patterns led to a discontinuous spread of predicted values (cf. Fig. 7 as well as Figs. 5 and 6). While the RMSE and accuracy measures suggested that these values were within a useful range, the apparent artificial distribution of the values indicated a prediction bias. For this reason, for subsequent analyses we preferentially used a thresholded-binary dataset for the training of a linear-kernel SVM, where the binarization was implemented through a median threshold for each VOI. Figure 7 shows predicted performance scores obtained with thresholded-binary and original-graded training. 
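The chance-level estimate mentioned above (randomly permuting the observed NIHSS scores and scoring them against the true values with the same tolerance) can be sketched as follows; the scores used here are synthetic, so the resulting percentage only illustrates the procedure, not the reported 34-36 % values.

```python
import numpy as np

def chance_level_accuracy(y_true, tolerance=3, n_repeats=100, seed=0):
    """Chance-level accuracy: permute the observed scores and count how often a
    randomly assigned score lies within the tolerance of the true score."""
    rng = np.random.default_rng(seed)
    accuracies = []
    for _ in range(n_repeats):
        permuted = rng.permutation(y_true)
        accuracies.append(np.mean(np.abs(permuted - y_true) <= tolerance))
    return 100.0 * float(np.mean(accuracies))

# Hypothetical NIHSS scores for 77 patients
rng = np.random.default_rng(3)
nihss = rng.integers(0, 22, size=77)
print(chance_level_accuracy(nihss))
```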
Specifically, we represent the 256 binary configurations required for the MSA [sorted from all lesioned (blue = 0) to all intact (red = 1)], with the corresponding mean value of the performance scores. The performance scores were predicted with the linear-kernel SVM, for left and right VOIs, based on both thresholded-binary and original-graded dataset training and the leave-one-out cross validation. The color map range of predicted scores in the color bars is the same for panels (a) and (b). It is interesting to note that there is a substantial difference between left and right brain regions. For right regions, the predicted scores tend to vary in a smaller range compared to left ones. This observation can be ascribed to the fact that the scores are based on a battery of functions mostly associated with the left hemisphere. Moreover, it is clear that the binary training, for both hemispheres, leads to a more even spread of predicted scores, while with the original-graded training, the predicted scores tend to exhibit only a few values of the possible global NIHSS range, and the correlation between the number of intact regions and the performance score is less evident. Representation of the error for left brain damaged patients computed with the leave-one-out approach with original-graded training and binary testing (a) and with thresholded-binary training and binary testing (b). The binarization is made with the median individual-threshold Representation of the error for right brain damaged patients computed with the leave-one-out technique with original-graded training and binary testing (a) and with thresholded-binary training and binary testing (b). The binarization is made with the median individual-threshold 256 binary state configurations required for the MSA, sorted from all-intact (red 1) to all-lesioned (blue 0) and corresponding mean predicted scores for left and right VOIs, obtained with thresholded-binary (a) and original-graded (b) training. The color map scale is the same for both panels MSA contributions MSA: general consideration In this section, we present the main results obtained with the MSA analysis applied separately to left and right VOIs. Specifically, we show the differences between contribution values obtained with thresholded-binary and original-graded training, between SVM classifiers (radial basis function or linear kernel), and between MSA variants (predicted MSA and estimated MSA). Predicted MSA: original-graded versus thresholded-binary dataset and linear- versus RBF-kernel SVM Here, we show the normalized mean MSA contribution values for the inverse NIHSS, using the linear and the RBF-kernel SVM, with thresholded-binary and original-graded training datasets, to compute unknown performance scores (predicted MSA). As left- and right-hemispheric lesions were strictly separated in the present patient sample, contributions of VOIs were computed separately for the left and right hemisphere. Standard deviation bars are derived from the leave-one-out cross validation during the prediction of performance scores. Specifically, we predicted the 256 unknown scores 77 times (for left-brain VOIs) and 72 times (for right-brain VOIs) and consequently computed the same number of MSA contribution values. Positive contribution values indicate that regions contribute positively to the performance of a task. Thus, if they are lesioned, the performance is lowered. 
By contrast, a negative contribution value means that the lesioning of these regions is beneficial for the performance of the task. We also show the normalized contribution values obtained with the estimated MSA, for the binary training and linear kernel SVM. In Fig. 8, we show the comparison between the contributions derived from the training with the original-graded and thresholded-binary dataset using the linear kernel SVM, for left and right VOIs respectively. For the binary training, subcortical regions, such as caudate and left insula, together with parietal and frontal lobes, were inferred to make the strongest contributions to brain functions reflected by the NIHSS. Moreover, all contributions (except for the right temporal lobe) were significantly different from zero, with negative contributions coming from right putamen and left thalamus (same results shown in [24]). Both for left and right VOIs, there were differences between the two training methods. For right VOIs the difference is more evident, especially for putamen and insula that seem to have a much stronger effect when the original-graded training is used. Comparison between mean contribution values obtained with original-graded (grey) and thresholded-binary (black) training, for left and right VOIs respectively, with the linear kernel SVM. Rank correlation between the contribution values of the alternative approaches is ρ = −0.21 (p-value = 0.62) in a and ρ = −0.05 (p-value = 0.93) in b In Fig. 9 we show the same quantities as Fig. 8, but generated using the radial basis function kernel SVM. The difference between the two training methods for the left VOIs is less evident than for right VOIs. Interestingly, the results obtained with the two SVM kernels do not differ considerably. We computed the rank correlation between the eight mean contribution values obtained with thresholded-binary training, with linear and RBF kernel: for left VOIs ρ = 0.42 (p-value = 0.30) and for right VOIs ρ = 0.55 (p-value = 0.17). We also computed the rank correlation between the eight mean contribution values obtained with original-graded training, with linear and RBF kernel: for left VOIs ρ = 0.17 (p-value = 0.7) and for right VOIs ρ = 0.78 (p-value = 0.03) . Comparison between mean contributions obtained with original-graded (grey) and thresholded-binary (black) training in left and right VOIs respectively, with the rbf kernel SVM. Rank correlation between the contribution values of the alternative approaches is ρ = 0.81 (p-value = 0.02) in a and ρ = 0.46 (p-value = 0.26) in b Estimated MSA: thresholded-binary dataset and linear-kernel SVM Figure 10 shows the contribution values obtained with the estimated MSA after simulations of 100 orderings of the set of multi-perturbation experiments. We used the linear kernel SVM with thresholded-binary training (individual median threshold) to predict the performance scores for all the perturbation configurations dictated by the sampled permutations. The output shows contributions which are almost identical to the contributions obtained with the predicted MSA (rank correlation is ρ = 0.98 for left VOIs and ρ = 0.95 for right VOIs, see also black contributions in Fig. 8). Also, smaller numbers of orderings (i.e., 50) work quite well (results not shown). In this study, where the number of VOIs is relatively small, there is no big advantage in using the estimated MSA, but in studies where the number of VOIs is much larger, its use becomes essential (see "Estimated MSA" section). 
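A minimal sketch of the estimated MSA used for Fig. 10 is given below: orderings are sampled at random and the marginal contributions are averaged (Eq. 3), with a caller-supplied predictor standing in for the trained SVM. The toy predictor in the example is purely illustrative.

```python
import random

def estimated_shapley(n, predict_v, n_orderings=100, seed=0):
    """Estimated MSA (Eq. 3): average marginal contributions over sampled orderings.

    n         : number of elements (VOIs)
    predict_v : callable mapping a frozenset of intact elements to a predicted
                performance score (e.g., an SVM trained on the available cases)
    """
    rng = random.Random(seed)
    totals = [0.0] * n
    elements = list(range(n))
    for _ in range(n_orderings):
        order = elements[:]
        rng.shuffle(order)
        intact = set()
        for i in order:
            totals[i] += predict_v(frozenset(intact | {i})) - predict_v(frozenset(intact))
            intact.add(i)
    return [total / n_orderings for total in totals]

# Toy predictor: performance simply equals the number of intact elements
print(estimated_shapley(8, lambda intact: float(len(intact)), n_orderings=100))
```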
Normalized contribution values obtained with estimated MSA (100 orderings), computed with thresholded-binary training in left and right VOIs respectively, with the linear kernel SVM. Rank correlation between estimated MSA and predicted MSA (black bars in Fig. 8) is ρ = 0.98 (p-value = 0.0004) for left VOIs and ρ = 0.95 (p-value = 0.0011) for right VOIs.
Discussion
In this paper, we investigated in detail the MSA methodology and showed its application to a large dataset of stroke patients. The dataset was used in a previous study by our group [24], but here we focused on clarifying methodological and technical questions of the MSA approach in lesion inference and identifying the most important parameters for the analysis. In particular, we investigated parameters involved in the preparation of the dataset for MSA, such as the binarization threshold (global or individual) for the graded lesion dataset, the kernels of the predictor and the consequent best-suited training set for the prediction of unknown behavioral scores. We also investigated MSA methodological variants, such as the estimated MSA, which may be relevant for future studies involving more finely resolved ROIs.
Data conditioning, choice of training set and predictor
One of the main technical limitations of the classic MSA method is the preparation of a complete lesion dataset in binary format, consisting of functional scores for all possible configurations of intact and lesioned states of VOIs, as required by the algorithm. The full information MSA approach requires complete functional information for all 2^n binary brain state configurations, where n is the number of regions. Thus, ideally, one would have available performance values for 2^n patients, whose lesion patterns are all different from each other. In clinical practice, this requirement is unrealistic. Even the present extensive study, which included 77 left- and 72 right-brain damaged patients, did not reach the required number of 256 distinct cases for each hemisphere, and a predictor had to be trained to obtain the full set of performance scores corresponding to the 2^n possible binary lesion configurations (predicted MSA). Moreover, in addition to the choice of the predictor (see "Choice of machine learning predictor" section), an extensive study of the best-suited training dataset is necessary. We investigated two training sets: the original-graded dataset and the thresholded-binary dataset. In clinical practice, lesion information at the VOI scale is not a binary category indicating whether the region is entirely lesioned or intact, but is provided as a graded percentage of lesioned voxels of a region. In this context, it is important to note that using the original-graded dataset for training utilizes the information of the original data in the training without loss due to binarization. However, the training and test data do not have the same features, since the predictor is trained with graded data and then tested with binary data (the 256 binary lesion configurations required for the MSA). By using a binary dataset (after thresholding) for training, the type of training and the test data are identical, but the approach involves the choice of a threshold for the binarization, as well as the potential loss of information from thresholding.
In particular, after binarization the number of unique configurations may change, because some graded configurations become equal to each other at the binary level (i.e., these configurations are collapsed into each other), but have different associated behavioral scores and also different graded lesion patterns. Each of the previous steps has to be carefully considered, since there is no ideal, objective method for binarizing graded data, and no predefined predictor for generating missing behavioral data. In the present study, we systematically investigated the binarization threshold and the parameters of the machine learning predictor (focusing on SVMs with linear or radial basis function kernel). In order to find the best solution for the present dataset, we performed a sensitivity analysis on global and individual thresholds for binarization and compared the results of the prediction with both original-graded training and thresholded-binary training in terms of accuracy, computed from the error in the prediction of behavioral scores. The training of the SVM using the original-graded values appeared to show a better prediction, due to the use of the continuously defined variables, as confirmed by RMSE and accuracy, which were slightly better than for thresholded-binary training. However, the use of training data of a different type from the testing data led to a discontinuous spread of predicted values, as shown in Fig. 7. The results computed with MSA by means of a thresholded-binary dataset for the training of a linear kernel SVM (threshold = median value of all non-zero percentages of lesioned voxels) showed that contributions were all significantly different from zero (with the exception of the right temporal lobe) and that subcortical regions, such as the bilateral caudate and insula, together with the parietal and frontal lobes, were inferred to make the strongest contributions to essential brain function as reflected by the NIHSS. Interestingly, MSA also revealed negative contributions, specifically from the right putamen and left thalamus (for interpretation see also [24]). The comparison of contribution values obtained with the original-graded training instead of thresholded-binary training yielded some differences, especially for the linear kernel SVM. Interestingly, the results obtained with the radial basis function SVM kernel showed no substantial differences from those obtained with the linear SVM kernel, especially for right VOIs (see rank correlations in "Predicted MSA: original-graded versus thresholded-binary dataset and linear- versus RBF-kernel SVM" section). Considering the differences between the two training methods, general recommendations can be derived. If the accuracies computed with original-graded and thresholded-binary training sets are similar and the dataset analyzed is small (i.e., composed of a small number of cases), the thresholding could cause significant loss of information due to the collapsing of configurations, and it would be preferable to choose the original-graded set to train the predictor and obtain the full set of performance scores. However, if the dataset analyzed is large, as for the data presented here, it is feasible to choose the thresholded-binary set, as we did.
MSA variants: advantages and drawbacks
The main drawback of the classic full information MSA is the need for 2^n performance scores corresponding to the binary configurations (here 256).
These numbers quickly increase with the number of elements of interest, requiring, for example, 1024 configurations for ten VOIs. For this reason, the application of the standard full information predicted MSA is limited to a small number of VOIs that need to be carefully selected. The number of brain regions that can be investigated properly with the standard MSA approach depends on the available sample size, but is typically limited to around ten brain regions, given the typical sample sizes of large multi-centre stroke studies. Otherwise, the number of unknown lesion configurations and consequently the number of behavioral scores that need to be predicted grows too large, which also represents a considerable limitation for any machine learning technique. Frequently however, a resolution of about ten regions of interest provides a meaningful scope for the interpretation of lesion findings. In this context we also investigated a variant of MSA, the estimated MSA, which is useful in studies where the number of system elements is too large to enumerate all configurations in a straightforward manner, such as studies focusing on many small VOIs or even single voxels. This MSA variant is computationally convenient, because it can sample orderings and calculate an unbiased estimator of the contribution of each element. MSA variants were already presented by Keinan et al. [26] where the authors focused on the analysis of large complex networks. Keinan et al. showed that the estimation and prediction variants successfully allowed the analysis of several neurocontrollers consisting of up to 100 neural elements. The contributions we obtained in the present study with estimated MSA were almost identical to the ones obtained with predicted MSA, and this result is encouraging in the perspective of analysing a larger number of more finely resolved anatomical or functional brain regions. In this context, it may eventually even be possible to apply the MSA for lesion inferences at the voxel-level. In doing so, it would no longer be necessary to use thresholds for binarization, since lesions at the voxel level produce binary states of lesioned and intact elementary nodes. However, such a feasibility analysis of maximum spatial resolution is beyond the scope of the present study and subject to future investigations. MSA versus other multivariate lesion inference approaches How does MSA differ to other multivariate lesion inferences? Another multivariate approach is multi-area pattern prediction (MAPP) [24], which is based on SVM and offers a way of comparing MSA and MVPA [17] strategies. While not identical to MVPA, MAPP operates in the same spirit, by computing the leave-one-out cross-validation with n different datasets (n = number of regions), obtained respectively by removing each single region one at a time. In this way, we can measure how important each region is for the prediction procedure (i.e., by its individual contribution to the prediction error [24]). Similar to MSA, MAPP makes use of SVM in order to compute the RMSE in the leave-one-out cross-validation procedure. Like MSA, it is also applied to the thresholded-binary dataset with the corresponding performance scores, but in contrast to MSA, it does not require the full set of lesion configurations with associated performance scores. It is also important to mention that controlling for total lesion volume can have a considerable impact on the lesion-symptom mapping approach [21]. 
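The MAPP procedure outlined above can be sketched as follows: each region is removed in turn from the binary lesion patterns, and the increase in leave-one-out RMSE relative to the full model is taken as that region's importance. This is a schematic reconstruction under stated assumptions (scikit-learn's SVC in place of the original SVM implementation, and randomly generated placeholder data), not the published MAPP code.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut

def loo_rmse(X, y):
    """Leave-one-out RMSE of a linear-kernel SVM predicting integer scores."""
    squared_errors = []
    for train, test in LeaveOneOut().split(X):
        clf = SVC(kernel="linear", C=1.0)
        clf.fit(X[train], y[train])
        squared_errors.append((clf.predict(X[test])[0] - y[test][0]) ** 2)
    return float(np.sqrt(np.mean(squared_errors)))

def mapp_importance(X_binary, y):
    """Increase in leave-one-out RMSE when each region is removed from the patterns."""
    baseline = loo_rmse(X_binary, y)
    return [loo_rmse(np.delete(X_binary, j, axis=1), y) - baseline
            for j in range(X_binary.shape[1])]

# Hypothetical binary lesion patterns (0 = lesioned, 1 = intact) and NIHSS scores
rng = np.random.default_rng(4)
X_binary = rng.integers(0, 2, size=(77, 8))
nihss = rng.integers(0, 22, size=77)
print(mapp_importance(X_binary, nihss))
```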
Indeed, considering lesion size can be especially important to separate the specific effects of damage to a particular voxel from effects resulting from the generally higher damage likelihood in patients with large lesions compared to patients with small lesions. In this context, Mirman et al. [37] reported similar anatomical results for univariate and multivariate approaches after accounting for lesion size. An exhaustive numerical comparison between MSA and MAPP, or other recent multivariate approaches (e.g., SVR-LSM [21]), should be based on ground-truth simulations [19], which is beyond the scope of the present project.
Conclusions
The MSA approach provides a new, principled method for the objective, multivariate computation of regional causal contributions to brain function. The approach reveals characteristic contribution patterns for behavioral and cognitive functions based on clinical scores and may provide useful guidance for rehabilitation. The method requires conditioning of the data, and we showed here that some parameters are crucial in the analysis: the threshold for binarization of graded lesion patterns, the choice of the algorithm for predicting unknown behavioral scores, and the choice of the training set. We demonstrated that there are no particular predictors or thresholds for the binarization that generally perform better than others. The choices of these settings are subjective, but we provide some general recommendations which, when considered in combination with a sensitivity analysis on these parameters, can be helpful for finding the best approach for given datasets. In general, the results presented here are still preliminary, but they indicate how MSA may allow building a matrix of causal functional contributions and may provide useful guidance for rehabilitation.
Abbreviations
MSA: multi-perturbation Shapley value analysis
NIHSS: National Institutes of Health Stroke Scale
SVM: support vector machine
VBM: voxel-based morphometry
VLSM: voxel-based lesion-symptom mapping
VAL: voxel-based analysis of lesions
AnaCOM: anatomo-clinical overlapping maps
LBM: lesion behavior mapping
MVPA: multivariate pattern analysis
VOI: volume of interest
FLAIR: fluid attenuated inversion recovery
DWI: diffusion weighted imaging
MRI: magnetic resonance imaging
k-NN: k-nearest neighbor
RMSE: root mean squared error
RBF: radial basis function
mRS: modified Rankin Scale
MAPP: multi-area pattern prediction
References
Broca P. Remarque sur le siège de la faculté du langage articulé, suivies d'une observation d'aphemie (perte de la parole). Bull Soc Anat Paris. 1861;6:330–57. Rorden C, Karnath HO. Using human brain lesions to infer function: a relic from a past era in the fMRI age? Nat Rev Neurosci. 2004;5:813–9. Young MP, Hilgetag CC, Scannell JW. On imputing function to structure from the behavioural effects of brain lesions. Philos Trans R Soc Lond B Ser B Biol Sci. 2000;355:147–61. Teuber HL. Physiological psychology. Annu Rev Psychol. 1955;6:267–96. Lomber GL, Payne BR. Removal of two halves restores the whole: reversal of visual hemineglect during bilateral cortical or collicular inactivation in the cat. Vis Neurosci. 1996;13:1143–56. Kapur N. Paradoxical functional facilitation in brain-behaviour research. A critical review. Brain J Neurol. 1996;119:1775–90. Kapur N. The paradoxical brain. Cambridge: Cambridge University Press; 2011. Sprague J. Interaction of cortex and superior colliculus in mediation of visually guided behavior in the cat. Science. 1966;153:1544–7. Wright IC, McGuire PK, Poline JB, Travere JM, Murray RM, Frith CD, Frackowiak RS, Friston KJ.
A voxel-based method for the statistical analysis of gray and white matter density applied to schizophrenia. NeuroImage. 1995;2:244–52. Ashburner J, Friston KJ. Voxel-based morphometry—the methods. NeuroImage. 2000;11:805–21. Frank RJ, Damasio H, Grabowski TJ. Brainvox: an interactive, multimodal visualization and analysis system for neuroanatomical imaging. NeuroImage. 1997;5:13–30. Bates E, Wilson SM, Saygin AP, Dick F, Sereno MI, Knight RT, Dronkers NF. Voxel-based lesion-symptom mapping. Nat Neurosci. 2003;6:448–50. Rorden C, Brett M. Stereotaxic display of brain lesions. Behav Neurol. 2000;1:191–200. Kinkingnéhun S, Volle E, Pélégrini-Issac M, Golmard JL, Lehéricy S, Du Boisguéheneuc F, Zhang-Nunes S, Sosson D, Duffau H, Samson Y, Levy R, Dubois B. A novel approach to clinical-radiological correlations: anatomo-clinical overlapping maps (AnaCOM): method and validation. NeuroImage. 2007;37:1237–49. Rorden C, Fridriksson J, Karnath HO. An evaluation of traditional and novel tools for lesion behavior mapping. NeuroImage. 2009;44:1355–62. Yekutieli D, Benjamini Y. Resampling-based false discovery rate controlling multiple test procedures for correlated test statistics. J Stat Plan Infer. 1999;82:171–96. Smith DV, Clithero JA, Rorden R, Karnath HO. Decoding the anatomical network of spatial attention. Proc Natl Acad Sci USA. 2013;110:1518–23. Forkert ND, Verleger T, Cheng B, Thomalla G, Hilgetag CC, Fiehler J. Multiclass support vector machine-based lesion mapping predicts functional outcome in ischemic stroke patients. PLoS One. 2015;10(6):e0129569. doi:10.1371/journal.pone.0129569. Mah YH, Husain M, Rees G, Nachev P. Human brain lesion deficit inference remapped. Brain J Neurol. 2014;137:2522–31. Karnath HO, Smith DV. The next step in modern brain lesion analysis: multivariate pattern analysis. Brain J Neurol. 2014;137:2405–7. Zhang Y, Kimberg DY, Coslett HB, Schwartz MF, Wang Z. Multivariate lesion-symptom mapping using support vector regression. Hum Brain Mapp. 2014;35:5861–76. Corbetta M, Ramsey L, Callejas A, Baldassarre A, Hacker CD, Siegel J, Astafiev SV, Rengachary J, Zinn K, Lang C, Connor L, Fucetola R, Strube M, Carter A, Shulman G. Common behavioral clusters and subcortical anatomy in stroke. Neuron. 2015;85:927–41. Keinan A, Sandbank B, Hilgetag CC, Meilijson I, Ruppin E. Fair attribution of functional contribution in artificial and biological networks. Neural Comput. 2004;16:1887–915. Zavaglia M, Forkert ND, Cheng B, Gerloff C, Thomalla G, Hilgetag CC. Mapping causal functional contributions derived from the clinical assessment of brain damage after stroke. NeuroImage Clin. 2015;9:83–94. Keinan A, Kaufman A, Sachs N, Hilgetag CC, Ruppin E. Fair localization of function via multi-lesion analysis. Neuroinformatics. 2004;2:163–8. Keinan A, Sandbank B, Hilgetag CC, Meilijson I, Ruppin E. Axiomatic scalable neurocontroller analysis via the Shapley value. Artificial Life. 2006;12:333–52. Kaufman A, Keinan A, Meilijson I, Kupiec M, Ruppin E. Quantitative analysis of genetic and neuronal multi-perturbation experiments. PLoS Comput Biol. 2005;1:e64. Kaufman A, Kupiec A, Ruppin E. Multi-knockout genetic network analysis: the Rad6 example. In: Proceedings of the 2004 IEEE computational systems bioinformatics conference (CSB). 2004. p. 1–9. Kaufman A, Serfaty C, Deouell LY, Ruppin E, Soroker N. Multiperturbation analysis of distributed neural networks: the case of spatial neglect. Hum Brain Mapp. 2009;30:3687–95. 
Brott T, Adams HP, Olinger CP, Marler JR, Barsan WG, Biller J, Spilker J, Holleran R, Eberle R, Hertzberg V. Measurements of acute cerebral infarction: a clinical examination scale. Stroke. 1989;20:864–70. Thomalla G, Cheng B, Ebinger M, Hao Q, Tourdias T, Wu O, Kim JS, Breuer L, Singer OC, Warach S, Christensen S, Treszl A, Forkert ND, Galinovic I, Rosenkranz M, Engelhorn T, Köhrmann M, Endres M, Kang DW, Dousset V, Sorensen AG, Liebeskind DS, Fiebach JB, Fiehler J, Gerloff C. DWI-FLAIR mismatch for the identification of patients with acute ischaemic stroke within 4·5 h of symptom onset (PRE-FLAIR): a multicentre observational study. Lancet Neurol. 2011;10:978–86. Cheng B, Brinkmann M, Forkert ND, Treszl A, Ebinger M, Köhrmann M, Wu O, Kang DW, Liebeskind DS, Tourdias T, Singer OC, Christensen S, Luby M, Warach S, Fiehler J, Fiebach JB, Gerloff C, Thomalla G. Quantitative measurements of relative fluid-attenuated inversion recovery (FLAIR) signal intensities in acute stroke for the prediction of time from symptomonset. J Cereb Blood Flow Metab. 2013;33:76–84. Collins D, Holmes C, Peters T, Evans A. Automatic 3D model based neuroanatomical segmentation. Hum Brain Mapp. 1995;3:190–208. Shapley LS. Stochastic games. Proc Natl Acad Sci USA. 1953;39:1095–100. Cortes C, Vapnik V. Support-vector networks. Mach Learn. 1995;20:273–97. Chang CC, Lin CJ. LIBSVM: a library for support vector machines. ACM Trans Intell Syst Technol. 2013;2:1–27. Mirman D, Zhang Y, Wang Z, Coslett HB, Schwartz MF. The ins and outs of meaning: behavioral and neuroanatomical dissociation of semantically-driven word retrieval and multimodal semantic recognition in aphasia. Neuropsychologia. 2015;76:208–19. MZ, NF and CCH designed the study, MZ and NF performed data analysis, BC, GT and CG contributed data, MZ, NF and CCH wrote the manuscript and all authors commented critically on the results and on the manuscript. All authors read and approved the final manuscript. Christian Gerloff has received fees as a consultant or lecture fees from Bayer Vital, Boehringer Ingelheim, EBS technologies, Glaxo Smith Kline, Lundbeck, Pfizer, Sanofi Aventis, Silk Road Medical, and UCB. Götz Thomalla has received fees as a consultant or lecture fees from Covidien and Boehringer Ingelheim. The dataset supporting the conclusions of this article is included within the article (Fig. 1). PRE-FLAIR: The study was approved by the local ethics committees at all centres. Either written or verbal informed consent was obtained for all patients, as required by local legislation. Research was supported by the ERA-NET NEURON project "BEYONDVIS", BMBF 01EW1002, the SFB 936 "Multi-site Communication in the Brain" (Projects A1, C1, C2, Z3) as well as TRR 169 "Cross-Modal Learning", Project A2. Department of Computational Neuroscience, University Medical Center Eppendorf, Hamburg University, Martinistraße 52, 20246, Hamburg, Germany Melissa Zavaglia, Nils D. Forkert & Claus C. Hilgetag School of Engineering and Science, Jacobs University Bremen, Campus Ring 1, 28759, Bremen, Germany Melissa Zavaglia Department of Radiology and Hotchkiss Brain Institute, University of Calgary, 3330 Hospital Drive NW, Calgary, AB, T2N 4N1, Canada Nils D. Forkert Department of Neurology, University Medical Center Eppendorf, Hamburg University, Martinistraße 52, 20246, Hamburg, Germany Bastian Cheng, Christian Gerloff & Götz Thomalla Department of Health Sciences, Boston University, 635 Commonwealth Ave., Boston, MA, 02215, USA Claus C. 
Hilgetag
Bastian Cheng, Christian Gerloff, Götz Thomalla
Correspondence to Melissa Zavaglia.
Zavaglia, M., Forkert, N.D., Cheng, B. et al. Technical considerations of a game-theoretical approach for lesion symptom mapping. BMC Neurosci 17, 40 (2016). DOI: https://doi.org/10.1186/s12868-016-0275-6
Keywords: Brain lesions; Multi-perturbation Shapley value analysis (MSA); Lesion inference; Functional prediction; Computational and theoretical neuromodeling
The Gauss-Bonnet theorem and the Tamagawa number. Author: Takashi Ono. Journal: Bull. Amer. Math. Soc. 71 (1965), 345-348. DOI: https://doi.org/10.1090/S0002-9904-1965-11290-3. MathSciNet review: 0176986.
Paclobutrazol Improves Sesame Yield by Increasing Dry Matter Accumulation and Reducing Seed Shattering Under Rainfed Conditions
Muhammad Zeeshan Mehmood1, Ghulam Qadir1, Obaid Afzal1, Atta Mohi Ud Din2, Muhammad Ali Raza2, Imran Khan3, Muhammad Jawad Hassan3, Samrah Afzal Awan3, Shakeel Ahmad4, Muhammad Ansar1, Muhammad Aqeel Aslam1 & Mukhtar Ahmed (ORCID: orcid.org/0000-0002-7223-5541)1,5
International Journal of Plant Production volume 15, pages 337–349 (2021).
Abstract
Several biotic and abiotic stresses significantly decrease the biomass accumulation and seed yield of sesame crops in rainfed areas. However, plant growth regulators (such as paclobutrazol) can improve the total dry matter and seed production of the sesame crop. The effects of paclobutrazol application on dry matter accumulation and seed yield had not been studied before in sesame under rainfed conditions. Therefore, a two-year field study during 2018 and 2019 was conducted with the key objectives to assess the impacts of paclobutrazol on leaf greenness, leaf area, total dry matter production and partitioning, seed shattering, and seed yield of sesame. Two sesame cultivars (TS-5 and TS-3) were treated with four paclobutrazol concentrations (P0 = Control, P1 = 100 mg L−1, P2 = 200 mg L−1, P3 = 300 mg L−1). The experiment was executed in an RCBD-factorial design with three replications. Compared with P0, treatment P3 improved the leaf greenness of sesame by 17%, 38%, and 60% at 45, 85, and 125 days after sowing, respectively. However, treatment P3 decreased the leaf area of sesame by 14% and 20% at 45 and 85 days after sowing compared with P0, respectively. Compared with P0, treatment P3 increased the leaf area by 46% at 125 days after sowing. On average, treatment P3 also improved the total biomass production by 21% and its partitioning into roots, stems, leaves, capsules, and seeds by 23%, 19%, 23%, 22%, and 40%, respectively, over the whole growing seasons as compared to P0. Moreover, under treatment P3, sesame attained the highest seed yield and the lowest seed shattering, 27% higher and 30% lower than P0, respectively. This study indicated that by applying paclobutrazol at the rate of 300 mg L−1 in sesame, the leaf greenness, leaf area, biomass accumulation, partitioning, seed yield, and shatter resistance could be improved. Thus, the optimum paclobutrazol level could enhance the dry matter accumulation and seed production capacity of sesame by decreasing shattering losses under rainfed conditions.
Introduction
Sesame (Sesamum indicum L.) is the major conventional oilseed crop, especially grown in marginal lands and drought-prone areas under rainfed conditions (Pathak et al. 2014). It is one of the oilseed crops with the highest oil content, ranging from 50 to 60%, depending upon the variety (Raja et al. 2007; Wei et al. 2015). Sesame oil contains important antioxidants, i.e., sesamolin and sesamol, which prevent rancidity of the oil (Rangkadilok et al. 2010), and it is a rich source of important unsaturated fatty acids, e.g., oleic acid (42%) and linolenic acid (35%) (Uzun et al. 2008). Additionally, sesame meal contains ash (5.27%), fiber (6.22%), and carbohydrates (28.14%), which make it highly nutritious for livestock (Raza et al. 2018). Several biotic and abiotic stresses adversely affect the yield components (i.e., capsule number per plant and seed number per capsule) of sesame, especially in rainfed regions (Jiang et al. 2009; Thornton et al. 2014).
Among all these abiotic stresses, the high-temperature and drought are the most limiting factors, which negatively impact the growth and development of sesame in rainfed conditions (Ciaffi et al. 1996; Raza et al. 2018). However, selecting the suitable cultivars (e. g., drought-resistant varieties) and better agronomic management practices (e. g., appropriate sowing date) significantly increased the seed yield of crops (Raza et al. 2018). In addition to these constraints, seed shattering is another important factor that considerably decreases the sesame production. Shattering is referred to the seed loss from ruptured capsules before or during the harvesting. Several factors are responsible for shattering losses, such as internal or external stresses, contact among the plant parts or harvest machinery, and fluctuations in temperature, humidity, and capsule moisture (Kadkol et al. 1984). However, these losses can be reduced to a certain extent by selecting shatter-resistant cultivar or through some innovative agronomic management options such as using plant growth regulators (Kuai et al. 2015). Several plant growth regulators like paclobutrazol, mepiquat chloride, and chlorocholine chloride were used effectively to regulate plant growth and development (Kumar et al. 2012). On top of that, Paclobutrazol application has been reported in earlier studies to minimize shattering losses in shatter-prone crops (Tripathi et al. 2003; Rajala et al. 2002). Moreover, paclobutrazol was effectively used to enhance the productivity and manage seed shattering in Birds-foot-trefoil (Lotus corniculatus L.) (Wiggans et al. 1956) and canola (Brassica napus L.) (Kuai et al. 2015). Paclobutrazol is a triazole compound used to regulate the growth and physiological process in many plant species. Paclobutrazol regulates plants' growth and physiological functioning by interfering with sterol and gibberellic acid biosynthesis (Khalil and Rahman 1995; Khan 2009) that inhibits the oxidation of ent-kaurene to ent-kauronoic acid through inactivating cytochrome P-450 dependent oxygenase (Zhu et al. 2004; Rady and Gaballah 2012). Therefore, paclobutrazol could be used as stress protectants to regulate the plant water relations (such as capsule moisture) and shattering under stress conditions. Previous research findings have revealed that paclobutrazol application with appropriate concentration can significantly regulate the morphological and growth responses and improve the seed yield and shattering resistance of plants (Kuai et al. 2015; Zhou and Xi 1993; Armstrong and Nicol 1991; Baylis and Hutley‐Bull 1991). Conversely, paclobutrazol application at high rates significantly reduced the crop yields (Guoping et al. 2001; Peng et al. 2014). Hence, the effects of paclobutrazol application on plant growth characteristics, seed yield, and shatter resistance could be erratic since they do not only depend on the plant potential but also interlinked with several other factors such as weather conditions, management practices, and plant responsiveness (Scarisbrick et al. 1985; Oswalt et al. 2014). Thus, a comprehensive study was needed to determine paclobutrazol's optimum level for higher sesame production, especially under rainfed conditions. Therefore, a two-year field study was initiated to understand the responses of sesame to paclobutrazol application. 
The key objectives of this study were to; (1) investigate the impacts of paclobutrazol application on biomass accumulation, seed yield, and yield components of sesame, and (2) determine the optimum paclobutrazol level to increase seed shatter resistance in sesame under rainfed conditions. Research Site Description This field experiment was carried out at Koont-farm (33°07′10.9′′ N, 73°00′37.7′′ E, 520 m elevation), the research area of PMAS-Arid Agriculture University Rawalpindi, Province Punjab, Pakistan, during two growing seasons in 2018 and 2019, respectively. The climate of the research area falls under the dry sub-humid region with high rainfall. Weather data, including rainfall, maximum and minimum temperature of the experimental site for both growing seasons, are presented in Table 1. According to the world reference base for soil resources (2015) map, the soil of the experimental site falls under the category of "durisols, calcisols, gypsisols, solonchaks, solonetz" with pH 7.2, electrical conductivity 1.02 dSm−1, available nitrogen 0.28 g kg−1, available phosphorus 2.5 g kg−1, available potassium 95 g kg−1, organic matter 0.55%, saturation 34%, and bulk density 1.23 g cm−3 in the topsoil layer of 20 cm. Table 1 Monthly minimum temperature (Tmin), maximum temperature (Tmax), and rainfall during the growing seasons of 2018 and 2019 This experiment was conducted in RCBD-factorial design with three replications. The field study consisted of two sesame cultivars (TS-5 and TS-3) and four paclobutrazol concentrations (P0-Control, P1-100, P2-200, and P3-300 mg L−1). Paclobutrazol treatments were applied twice with the same dose at the pre-reproductive (35 days after sowing) and late-bloom (80 days after sowing) stages following the previously published phenological scale (Langham 2007). Each plot's size was 30 m2 (6 m length × 5 m wide), and the total area of the experimental plots was 720 m2 (30m2 × 24 plots). We chose the pre-reproductive stage, to regulate the plant height and number of branches during early vegetative growth while the late bloom-stage to improve the seed yield and crop shatter resistance at the maturity. Paclobutrazol in liquid form was evenly mixed with distilled water, and each paclobutrazol treatment was foliar applied with a rechargeable electric knapsack sprayer. Sesame was planted in the first week of July and harvested in the second week of November in both years. Sowing was performed with a single row hand-operated seed drill at the seeding depth of 2 cm, and the seed rate was applied at 5 kg ha−1. Row to row (R-R) and plant to plant (P-P) distance was maintained at 45 cm and 10 cm, respectively, by over-seeding at sowing. Then thinning was done after fifteen days of germination, which resulted in the planting density of 200,000 plants ha−1. All the other recommended practices were performed uniformly in all experimental units. The crop phenological stages were determined by using a previously described phenological scale (Langham 2007). The leaf greenness of sesame plants was measured from each applied treatment at 45, 85, and 125 days after sowing (DAS). The chlorophyll meter SPAD-502 (Konica Minolta, Japan) was used for measuring the leaf greenness from different points of sesame plant, and then average values of SPAD were recorded at each interval. Similarly, the leaf area of sesame plants from all the applied treatments was determined at 45, 85, and 125 DAS. 
For this purpose, five sesame plants at each interval were destructively sampled from each experimental plot. The following formula was then used to determine the leaf area of sesame (Silva et al. 2002). $$ \text{AF} = (\text{L} \times \text{W}) \times \text{f} $$ where "AF" is the leaf area (cm2) of sesame, "L" and "W" are the length and width of the leaf, and "f" is the correction factor, which is 0.70 for sesame. The biomass accumulation and partitioning in sesame plants were measured at 45, 85, and 125 DAS by destructive sampling. Fifteen sesame plants from each treatment were harvested manually, including the roots by carefully digging the soil, and roots were rinsed with water to remove adhering soil. Then all the plants were separated into roots, leaves, stems, capsules, and seeds. All the plant organs were oven-dried first at 105 °C for one hour to kill the fresh tissues, then at 70 °C to obtain the constant weight for total biomass accumulation and partitioning analysis (Raza et al. 2019). When more than 90% of capsules attained the mature capsule color, twenty representative plants from the two central rows of each sub-plot were harvested manually using a sickle; bundles were made and sun-dried for one week by keeping the plants in a vertical direction. After drying, plants were threshed manually to determine the seed yield and yield components (number of capsules plant−1, number of seeds capsule−1, and thousand seed weight) of sesame under the applied treatments. Twenty representative plants from each plot were selected, the number of capsules plant−1 was counted, and then the average number of capsules plant−1 was calculated. For the number of seeds capsule−1, one hundred capsules were randomly taken from each plot at harvesting, then the capsules were threshed to count the number of seeds capsule−1, and the average was calculated. Similarly, for thousand seed weight, three lots of one thousand seeds from the bulk seed lot of each plot were oven-dried at 65 °C to attain the constant weight. Thousand seed weight was measured using an electrical balance, and the average was calculated. Seed yield was determined by manually threshing the sun-dried bundles from each plot and then converted to kg ha−1. The sesame seed shatter resistance under the studied treatments was measured by comparing the seed losses between shattered and non-shattered capsules. One hundred non-shattered and one hundred shattered capsules were clipped from the plants of each treatment. Capsules were threshed manually to obtain seed weight and compared to calculate the shattering losses and shattering percentage for sesame under the different paclobutrazol treatments (Gan et al. 2008). Statistical Analysis Statistical analysis was performed with Statistix 8.1 (V.8.1, Statistix, USA). Significant differences among cultivars and paclobutrazol treatments were computed using a two-way analysis of variance (ANOVA) combined with the least significant difference (LSD) test. The significance of the difference between means was evaluated at a 5% probability level (p < 0.05). The Shapiro–Wilk and Kolmogorov–Smirnov normality tests were conducted to confirm that the data could be subjected to regression analysis (Montgomery 2017). Afterwards, linear regression analysis was conducted between paclobutrazol doses and grain yield to assess the impact of the treatments. Leaf Greenness The paclobutrazol application significantly enhanced the leaf greenness of the sesame plant at 45, 85, and 125 DAS.
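As a quick illustration of the measurement calculations described in the methods above, the short Python sketch below implements the leaf-area formula of Silva et al. (2002) and one plausible reading of the shattering comparison of Gan et al. (2008). The sample values are hypothetical placeholders rather than data from this study, and the exact shattering formula is an assumption, since it is not spelled out in the text.

# Sketch of the leaf-area and shattering calculations described in the methods.
# All sample measurements below are hypothetical placeholders.

CORRECTION_FACTOR = 0.70  # "f" for sesame (Silva et al. 2002)

def leaf_area(length_cm, width_cm, f=CORRECTION_FACTOR):
    """Leaf area AF = (L x W) x f, in cm^2."""
    return length_cm * width_cm * f

def shattering_percentage(shattered_seed_wt_g, non_shattered_seed_wt_g):
    """One plausible reading of the Gan et al. (2008) comparison: seed weight lost
    from shattered capsules as a percentage of the weight retained by intact
    capsules (the text does not state the exact formula, so this is an assumption)."""
    loss = non_shattered_seed_wt_g - shattered_seed_wt_g
    return 100.0 * loss / non_shattered_seed_wt_g

# Hypothetical length/width pairs (cm) for five destructively sampled leaves
leaves = [(9.5, 4.2), (8.8, 3.9), (10.1, 4.5), (7.6, 3.4), (9.0, 4.0)]
areas = [leaf_area(l, w) for l, w in leaves]
print("mean leaf area: %.1f cm^2" % (sum(areas) / len(areas)))

# Hypothetical seed weights (g) from 100 shattered vs. 100 non-shattered capsules
print("shattering: %.1f %%" % shattering_percentage(8.2, 10.0))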
Treatment P3 had the greatest influence on the leaf greenness of sesame plants compared to control (Fig. 1). For instance, the average across the years, treatment P3 (300 mg L−1) increased the leaf greenness of sesame plants by 17%, 38%, and 60% at 45, 85, and 125 DAS, respectively, than the control treatment, indicating that the paclobutrazol application can significantly improve the leaf greenness of the sesame plant. Leaf greenness of sesame at 45, 85, and 125 days after sowing (DAS) in 2018 and 2019. The P0, P1, P2, and P3 represent the paclobutrazol treatments, Control, 100 mg L−1, 200 mg L−1, and 300 mg L−1, respectively. Means are averaged over three replicates Leaf Area The paclobutrazol application significantly (p < 0.05) influenced the leaf area of the sesame plant at 45, 85, and 125 DAS in both cultivars, except the non-significant difference, was observed between cultivars at 45 DAS in 2018 and 2019, respectively. Overall, compared to control, paclobutrazol concentration P3 (300 mg L−1) had the greatest impact on the leaf area of sesame at all intervals. On average, over the years, in comparison to control, paclobutrazol application P3 (300 mg L−1) decreased the leaf area of sesame by 14% and 25% at 45 and 85 DAS, respectively. However, P3 (300 mg L−1) treatment showed a higher leaf area by 46% compared to control at 125 DAS (Fig. 2). In this two-year experiment, results suggested that leaf area development in sesame was directly associated with paclobutrazol concentrations. Leaf area of sesame plants at 45, 85, and 125 days after sowing (DAS) in 2018 and 2019. The P0, P1, P2, and P3 represent the paclobutrazol treatments, Control, 100 mg L−1, 200 mg L−1, and 300 mg L−1, respectively. Means are averaged over three replicates Biomass Accumulation Different paclobutrazol concentrations had a significant (p < 0.05) impact on total biomass accumulation (TBA) of sesame (Table 2). Overall, across the years, P3 (300 mg L−1) produced the highest biomass of sesame (388.2, 1029.8 and 898.2 g m−2 in 2018 and 401.9, 1076.5 and 972.4 g m−2 in 2019) than the control (323.9, 845.5 and 715.9 g m−2 in 2018 and 346.5, 907.5 and 783.1 g m−2 in 2019) at 45, 85, and 125 DAS, respectively. Paclobutrazol concentration P3 (300 mg L−1) increased the total biomass accumulation of sesame plant by 20%, 16%, and 22% in 2018, and 19%, 25%, and 24% in 2019, at 45, 85, and 125 DAS, respectively. Table 2 Effects of paclobutrazol application on total biomass accumulation of sesame in 2018 and 2019 Biomass Partitioning Paclobutrazol application with various concentrations significantly altered the biomass partitioning patterns in various plant organs of the sesame. Overall, in all the treatments, P0 depicted the lowest biomass gains in the different plant parts at 45, 85, and 125 DAS (Fig. 3). Seed biomass of sesame at maturity was highest (137.0 and 145.9 g m−2) in P3 (300 mg L−1) treatment and lowest (109.3 and 117.3 g m−2) in P0 treatment in both years, respectively. On average, across both years, P3 (300 mg L−1) was the only treatment that increased the root (by 26%, 19%, and 23%), stem (by 16%, 17%, and 23%), and leaves (by 17%, 16%, and 34%) biomass at 45, 85 and 125 DAS while capsule hull (by 23% and 22%) and seed (by 55% and 25%) biomass at 85 and 125 DAS respectively as compared to control. Biomass partitioning of sesame plants at 45, 85, and 125 days after sowing (DAS) in 2018 and 2019. 
The P0, P1, P2, and P3 represent the paclobutrazol treatments, Control, 100 mg L−1, 200 mg L−1, and 300 mg L−1, respectively. Means are averaged over three replicates Yield Components Yield components of sesame were significantly affected by various application concentrations of paclobutrazol (Table 3). The number of capsules plant−1, number of seeds capsule−1, and thousand seed weight were altered considerably in P3 treatment relative to other application concentrations. Specifically, the mean maximum values of the number of capsules plant−1 (33.0 and 33.7) and thousand seed weight (3.67 and 3.74 g) were observed in P3 (300 mg L−1). In contrast, the maximum average seeds capsule−1 (63.8 and 64.3) was observed in control during 2018 and 2019, respectively. Furthermore, P3 (300 mg L−1) treatment increased the number of capsules plant−1 by 26% and 25% and thousand seed weight by 10% and 11% compared to control in both years, respectively. Table 3 Effects of paclobutrazol application on yield components and seed yield of sesame in 2018 and 2019 Seed Yield Sesame seed yield was significantly (p < 0.05) higher in P3 (300 mg L−1) treatment (1379.1 and 1435.5 kg ha−1) compared with P2 (1303.4 and 1384.3 kg ha−1), P1 (1208.8 and 1263.1 kg ha−1), and P0 (1083.2 and 1141.8 kg ha−1) in 2018 and 2019 respectively (Table 3). Importantly in paclobutrazol treatments, across the years, P3 (300 mg L−1) was the only treatment that enhanced the seed yield of sesame by 27% and 26% in comparison with control. The seed yield of sesame increased linearly in response to paclobutrazol application (Fig. 4). Furthermore, TS-5 had a higher seed yield (1302.7 and 1368.7 kg ha−1) than the TS-3 (1184.5 and 1243.6 kg ha−1) in both years. Overall, TS-5 showed the average greater seed yield by 10% than the TS-3 in both growing seasons (Table 3). Linear regression analysis for paclobutrazol vs seed yield of sesame. The 1.0, 2.0, 3.0, and 4.0 represent the paclobutrazol treatments, Control, 100 mg L−1, 200 mg L−1, and 300 mg L−1, respectively Seed Shattering Losses and Percentage Sesame seed shattering losses were significantly reduced with the paclobutrazol application (Fig. 5). Among the paclobutrazol concentrations, sesame plants in P3 (300 mg L−1) treatment had the lowest (20.50 and 19.57 g m−2) shattering losses than the control (24.65 and 24.02 g m−2). Paclobutrazol treatment P3 (300 mg L−1), decreased the shattering losses in sesame by 17% and 19% in 2018 and 2019, respectively. Similarly, the lowest shattering percentage was recorded in P3 (300 mg L−1) treatment (13.1 and 12.2) as compared with control (18.6 and 17.5) (Fig. 6). However, between the cultivars, TS-5 had the lower seed yield losses and percentage compared to TS-3. On average, across the years, TS-5 had the lower seed yield losses by 2% and shattering percentage by 9% compared to TS-3. Shattering losses in 2018 and 2019. The P0, P1, P2, and P3 represent the paclobutrazol treatments, Control, 100 mg L−1, 200 mg L−1, and 300 mg L−1, respectively. Means are averaged over three replicates Shattering percentage in 2018 and 2019. The P0, P1, P2, and P3 represent the paclobutrazol treatments, Control, 100 mg L−1, 200 mg L−1, and 300 mg L−1, respectively. Means are averaged over three replicates Linear Regression Analysis and Normality Test The results of linear regression analysis showed significant relationship between paclobutrazol application and seed yield of sesame (Fig. 4). 
After normality was confirmed with the Shapiro–Wilk and Kolmogorov–Smirnov tests, the fitted linear regression equation was: $$ \text{Grain yield} = 1026.815 + 99.231 \times \text{paclobutrazol concentration} $$ The coefficient of determination (R2) of the linear model was 0.453, while the adjusted R2 was 0.441. The validity of the model is confirmed by the F test, as presented in Table 4. Table 4 Linear regression analysis between paclobutrazol application and seed yield of sesame using Normality Test (Shapiro–Wilk) Our findings demonstrated significant improvements in the leaf greenness of sesame plants under increasing paclobutrazol concentrations. A previous study reported that the application of triazoles significantly increased the chlorophyll contents of soybean leaves by regulating the expression of key enzymes involved in the biosynthesis of chlorophyll (Liu et al. 2015). Moreover, evidence suggests that application of triazoles increases the chlorophyll content and photosynthetic rate of leaves and delays leaf senescence in plants (Yan et al. 2015). Results of this study were in line with earlier studies where plant growth regulators, especially triazoles, enhanced chlorophyll synthesis and delayed leaf senescence in different crops (Liu et al. 2015; Wang et al. 2009; Ahmad et al. 2019). Leaf area of sesame was significantly reduced at higher concentrations, which indicates the growth-retarding effect of paclobutrazol on plants (Soumya et al. 2017; Pal et al. 2016). Specifically, paclobutrazol interferes with gibberellic acid synthesis by impairing the oxidation of ent-kaurene to ent-kauronoic acid through inactivating cytochrome P450-dependent oxygenase (Zhu et al. 2004; Rady and Gaballah 2012). However, at maturity, the leaf area was significantly higher in the P3 (300 mg L−1) treatment, which is linked with delayed leaf senescence and stay-green leaf characteristics, as paclobutrazol-treated plants have darker green foliage and higher chlorophyll content (Jiang et al. 2019; Tesfahun 2018). It is well established that delayed leaf senescence could enhance photosynthate formation and mobilization from leaves towards the reproductive tissues of the plant, which could potentially improve the seed yield (Feng et al. 2019; Gregersen 2011). Additionally, it could promote nutrient remobilization from senescing leaves to developing organs (Joshi et al. 2019). Thus, optimization of the paclobutrazol concentration can provide an opportunity for yield improvements due to a better source–sink relationship and delayed leaf senescence (Ahmad et al. 2019; Wang et al. 2009; Upadhyaya et al. 1985). The current experiment demonstrates that the exogenous application of paclobutrazol significantly altered the total biomass accumulation and partitioning patterns in sesame. Moreover, our results revealed that total biomass accumulation was higher in the P3 (300 mg L−1) treatment than in non-treated plants. Enhanced nutrient and water translocation within the plants under paclobutrazol application increases the biomass production of plants (Kamran et al. 2018; Kuai et al. 2015). Furthermore, the P3 (300 mg L−1) treatment changed the biomass partitioning among plant organs and the root and seed biomass of sesame. On average, during both years, it increased the root and seed biomass (by 23% and 25%, respectively) compared to control at maturity. Changes in biomass partitioning (especially into seeds) might be associated with better root architecture, which enhances nutrient and water translocation in the plant (Kamran et al. 2018).
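As a rough computational cross-check of the dose-yield regression reported above, the following Python sketch refits an ordinary least-squares line to the year-wise treatment-mean yields from Table 3, with doses coded 1.0-4.0 as in Fig. 4. This is not the authors' plot-level analysis: the slope and intercept come out close to the published coefficients, but the R2 computed on treatment means is much higher than the reported 0.453 obtained from the full data set.

# Refit of the dose-yield line using the treatment-mean yields reported in Table 3.
# Dose coding follows Fig. 4: 1.0 = control, 2.0 = 100, 3.0 = 200, 4.0 = 300 mg/L.
import numpy as np

dose_code = np.array([1, 1, 2, 2, 3, 3, 4, 4], dtype=float)      # 2018 and 2019 per dose
seed_yield = np.array([1083.2, 1141.8, 1208.8, 1263.1,
                       1303.4, 1384.3, 1379.1, 1435.5])           # kg/ha, Table 3

slope, intercept = np.polyfit(dose_code, seed_yield, deg=1)       # ordinary least squares
pred = intercept + slope * dose_code
r2 = 1.0 - np.sum((seed_yield - pred) ** 2) / np.sum((seed_yield - seed_yield.mean()) ** 2)
print("fitted: yield = %.1f + %.1f * dose_code  (R^2 on means = %.2f)" % (intercept, slope, r2))

# Prediction from the coefficients reported in the paper
print("reported model at dose code 4.0:", 1026.815 + 99.231 * 4.0, "kg/ha")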
However, an increase in root biomass was interlinked with enhanced row formation in cortical cells due to plant growth regulators (Fletcher et al. 2000; Barnes et al. 1989; Burrows et al. 1992). Plant treatment with triazoles increases the root extension, radical cell expansion (Wang and Li 1992), and association with larger parenchyma cells (Fletcher et al. 2000), which results in increased root biomass in comparison with non-treated plants (Qi et al. 2012; Kamran et al. 2018). In past reports, a similar trend was reported for root biomass in maize (Wan-rong et al. 2014), wheat (Hajihashemi et al. 2007), and soybean (Yan et al. 2010) under the paclobutrazol application. Seed yield differences under different levels of the paclobutrazol application were also evaluated in this study. Sesame yield increased linearly in response to the doses of paclobutrazol application. Seed yield of oilseed crops is determined by different yield contributing traits, such as the number of capsules, number of seeds per capsule, and seed weight (Wang et al. 2011). In the current study, different paclobutrazol levels caused significant variations in these yield components of sesame. It is well understood and reported that paclobutrazol at specific concentrations promotes flower initiation (Wilkinson and Richards 1987), flower bud formation (Blanco 1988; Kaska et al. 1991), and economic yield (Kuai et al. 2015; Kamran et al. 2018) in several plant species. Consistently, an increase in the number of capsules under different paclobutrazol treatments in sesame may be due to the stimulation of flower bud formation and flower initiation. Similarly, paclobutrazol increases the average fruit size in Crimson Gold (Blanco 1988) and seed weight in canola and maize, while it has the tendency to reduce the seeds number per capsule at higher application concentrations (Kamran et al. 2018; Kuai et al. 2015). Reduced seeds per capsule were observed under studied paclobutrazol treatments, while greatest reduction was recorded in P3 (300 mg L−1) treatment. However, decreased seeds number per capsule helped to relieve the competition for nutrient allocation in seeds (Nahar and Ikeda 2002), which consequently increased the thousand seed weight and seed yield in sesame. Moreover, seed yield and crop growth are closely linked with root architecture, because it determines the acquisition, uptake, and utilization of mineral nutrients and water (Qi et al. 2012). Hence, in rainfed farming conditions, where ample supply of water and nutrient is not possible, improved root architecture under paclobutrazol application will enhance plant access to non-uniformly distributed nutrients and water, that could increase the crops yield (Zhang et al. 2009; Kamran et al. 2018). The impact of different paclobutrazol levels on shattering losses was investigated, and the findings of this experiment demonstrated that seed shattering was significantly reduced under P3 (300 mg L−1) treatment in both years. The possible increase in shattering resistance may be attributed to paclobutrazol effects on capsule maturity (Gan et al. 2008), capsule wall thickness (Child et al. 2003), capsule dry weight, and water content (Kuai et al. 2015). In past investigations, plant growth regulators and different kinds of desiccants were successfully used to control the vegetative growth and shattering losses in various crops (Wiggans et al. 1956; Kuai et al. 2015). Metcalfe et al. 
(1957) reported that the water content of the capsule is a critical determining factor for shattering losses and could facilitate in improving the shattering resistance of the crops. However, mechanisms involved in changing the structure, biochemistry, and shatter resistance under paclobutrazol application, still lack a comprehensive understanding and need further investigation (Kuai et al. 2015). Limitations and Implications During this study, we noted the following limitations during sampling and measurements. The method used to determine leaf area is not comprehensive and accurate because sesame has large leaf shape variations. Still, the equation only considers a single correction factor for all of them. Therefore, results may not reflect the considerable variation in sesame leaves. Moreover, the method used for seed shattering measurement was reported earlier to assess the seed shattering in sesame, but it also had some limitations. For instance, this method does not assume the variations in seed weight of different capsules, which can greatly vary. Additionally, the paclobutrazol can significantly influence the size and weight of capsules in the top, middle, and bottom of the plant. Thus, these limitations can affect the results obtained using these methods. This study highlights the paclobutrazol's effects on leaf greenness, leaf area, biomass accumulation and distribution, seed yield, and seed shattering. Thus, we present the following implications for improvement in sesame production. Our results suggested that paclobutrazol application can enhance the seed yield and reduce the seed shattering in sesame. Additionally, these findings could also be used for developing the shattering resistance in sesame, which will favor the mechanical harvesting of sesame that will improve the farming system efficiency and net returns. Furthermore, to the best of our knowledge, this research is the first to report the effects of paclobutrazol application on dry matter accumulation, seed yield, and shatter resistance in sesame under rainfed conditions. Our study revealed that different paclobutrazol concentrations significantly impacted the leaf greenness, leaf area, and leaf senescence of sesame. Similarly, the total biomass accumulation and partitioning of sesame significantly improved under the application of paclobutrazol, which ultimately increased the total seed biomass. Hence, improved leaf greenness and leaf area may have delayed the leaf senescence and improved dry matter accumulation, which finally enhanced the sesame's final seed yield. Moreover, paclobutrazol application significantly reduced the shattering losses. Overall, the paclobutrazol concentration of 300 mg L−1 produced the highest seed yield and lowest seed shattering. Thus, our two-year field study results suggest that sesame yield and shatter resistance in sesame could be enhanced by applying optimum paclobutrazol level. Ahmad, I., Kamran, M., Su, W., Haiqi, W., Ali, S., Bilegjargal, B., et al. (2019). Application of uniconazole improves photosynthetic efficiency of maize by enhancing the antioxidant defense mechanism and delaying leaf senescence in semiarid regions. Journal of Plant Growth Regulation, 38(3), 855–869. https://doi.org/10.1007/s00344-018-9897-5. Armstrong, E., & Nicol, H. (1991). Reducing height and lodging in rapeseed with growth regulators. Australian Journal of Experimental Agriculture, 31(2), 245–250. https://doi.org/10.1071/EA9910245. Barnes, A., Walser, R., & Davis, T. (1989). 
Anatomy of Zea mays and Glycine max seedlings treated with triazole plant growth regulators. Biologia Plantarum, 31(5), 370–375. https://doi.org/10.1007/BF02876355. Baylis, A., & Hutley-Bull, P. (1991). The effects of a paclobutrazol-based growth regulator on the yield, quality and ease of management of oilseed rape. Annals of Applied Biology, 118(2), 445–452. https://doi.org/10.1111/j.1744-7348.1991.tb05645.x. Blanco, A. (1988). Control of shoot growth of peach and nectarine trees with paclobutrazol. Journal of Horticultural Science, 63(2), 201–207. https://doi.org/10.1080/14620316.1988.11515848. Burrows, G., Boag, T., & Stewart, W. (1992). Changes in leaf, stem, and root anatomy of Chrysanthemum cv. Lillian Hoek following paclobutrazol application. Journal of Plant Growth Regulation, 11(4), 189. https://doi.org/10.1007/BF02115476. Child, R., Summers, J., Babij, J., Farrent, J., & Bruce, D. (2003). Increased resistance to pod shatter is associated with changes in the vascular structure in pods of a resynthesized Brassica napus line. Journal of Experimental Botany, 54(389), 1919–1930. https://doi.org/10.1093/jxb/erg209. Ciaffi, M., Tozzi, L., Borghi, B., Corbellini, M., & Lafiandra, D. (1996). Effect of heat shock during grain filling on the gluten protein composition of bread wheat. Journal of Cereal Science, 24(2), 91–100. https://doi.org/10.1006/jcrs.1996.0042. Feng, L., Raza, M. A., Li, Z., Chen, Y., Khalid, M. H. B., Du, J., et al. (2019). The influence of light intensity and leaf movement on photosynthesis characteristics and carbon balance of soybean. Frontiers in Plant Science, 9, 1952. https://doi.org/10.3389/fpls.2018.01952. Fletcher, R. A., Gilley, A., Sankhla, N., & Davis, T. D. (2000). Triazoles as plant growth regulators and stress protectants. Horticultural Reviews, 24, 55–138. https://doi.org/10.1002/9780470650776.ch3. Gan, Y., Malhi, S., Brandt, S., & McDonald, C. (2008). Assessment of seed shattering resistance and yield loss in five oilseed crops. Canadian Journal of Plant Science, 88(1), 267–270. https://doi.org/10.4141/CJPS07028. Gregersen, P. L. (2011). Senescence and nutrient remobilization in crop plants. The Molecular and Physiological Basis of Nutrient Use Efficiency in Crops. https://doi.org/10.1002/9780470960707. Guoping, Z., Jianxing, C., & Bull, D. A. (2001). The effects of timing of N application and plant growth regulators on morphogenesis and yield formation in wheat. Plant Growth Regulation, 35(3), 239–245. https://doi.org/10.1023/A:1014411316780. Hajihashemi, S., Kiarostami, K., Saboora, A., & Enteshari, S. (2007). Exogenously applied paclobutrazol modulates growth in salt-stressed wheat plants. Plant Growth Regulation, 53(2), 117–128. https://doi.org/10.1007/s10725-007-9209-8. Jiang, D., Yue, H., Wollenweber, B., Tan, W., Mu, H., Bo, Y., et al. (2009). Effects of post-anthesis drought and waterlogging on accumulation of high-molecular-weight glutenin subunits and glutenin macropolymers content in wheat grain. Journal of Agronomy and Crop Science, 195(2), 89–97. https://doi.org/10.1111/j.1439-037X.2008.00353.x. Jiang, X., Wang, Y., Xie, H., Li, R., Wei, J., & Liu, Y. (2019). Environmental behavior of paclobutrazol in soil and its toxicity on potato and taro plants. Environmental Science and Pollution Research, 26(26), 27385–27395. https://doi.org/10.1007/s11356-019-05947-9. Joshi, S., Choukimath, A., Isenegger, D., Panozzo, J., Spangenberg, G., & Kant, S. (2019). 
Improved wheat growth and yield by delayed leaf senescence using developmentally regulated expression of a cytokinin biosynthesis gene. Frontiers in Plant Science, 10, 1285. https://doi.org/10.3389/fpls.2019.01285. Kadkol, G., Macmillan, R., Burrow, R., & Halloran, G. (1984). Evaluation of Brassica genotypes for resistance to shatter. I. Development of a laboratory test. Euphytica, 33(1), 63–73. https://doi.org/10.1007/BF00022751. Kamran, M., Wennan, S., Ahmad, I., Xiangping, M., Wenwen, C., et al. (2018). Application of paclobutrazol affect maize grain yield by regulating root morphological and physiological characteristics under a semi-arid region. Scientific Reports, 8(1), 1–15. https://doi.org/10.1038/s41598-018-23166-z. Kaska, N., Kuden, A., & Kuden, A. (1991). Physiological effects of PP333 (paclobutrazol) on golden delicious apple cultivars. Journal of Cairo University Agricultural Faculty, 5(4), 87–94. https://doi.org/10.1007/s00344-013-9353-5. Khalil, I. A., & Rahman, H.-U. (1995). Effect of paclobutrazol on growth, chloroplast pigments and sterol biosynthesis of maize (Zea mays L.). Plant Science, 105(1), 15–21. https://doi.org/10.1016/0168-9452(94)04028-F. Khan, M. (2009). Sterol biosynthesis inhibition by paclobutrazol induces greater aluminum (Al) sensitivity in AI-tolerant rice. American Journal of Plant Physiology, 4(3), 89–99. https://doi.org/10.3923/ajpp.2009.89.99. Kuai, J., Yang, Y., Sun, Y., Zhou, G., Zuo, Q., Wu, J., et al. (2015). Paclobutrazol increases canola seed yield by enhancing lodging and pod shatter resistance in Brassica napus L. Field Crops Research, 180, 10–20. https://doi.org/10.1016/j.fcr.2015.05.004. Kumar-Ghatty, S., Satyanarayana, J., Guha, A., Chaitanya, B., & Reddy, A. R. (2012). Paclobutrazol treatment as a potential strategy for higher seed and oil yield in field-grown Camelina sativa L. Crantz. BMC Research Notes, 5(1), 137. https://doi.org/10.1186/1756-0500-5-137. Langham, D. R. (2007). Phenology of sesame. In A. S. H. S. Press (Ed.), Issues in New Crops and New Uses, Janick & Whipkey (pp. 144–182). Alexandria: ASHS Press. Liu, Y., Fang, Y., Huang, M., Jin, Y., Sun, J., Tao, X., et al. (2015). Uniconazole-induced starch accumulation in the bioenergy crop duckweed (Landoltia punctata) II: Transcriptome alterations of pathways involved in carbohydrate metabolism and endogenous hormone crosstalk. Biotechnology for Biofuels, 8(1), 1–12. https://doi.org/10.1186/s13068-015-0245-8. Metcalfe, D., Johnson, I., & Shaw, R. (1957). The relation between pod dehiscence, relative humidity and moisture equilibrium in birdsfoot trefoil, lotus corniculatus 1. Agronomy Journal, 49(3), 130–134. https://doi.org/10.2134/agronj1957.00021962004900030006x. Montgomery, D. C. (2017). Design and Analysis of Experiments. Hoboken: Wiley. Nahar, B. S., & Ikeda, T. (2002). Effect of different concentrations of Figaron on production and abscission of reproductive organs, growth, and yield in soybean (Glycine max L.). Field Crops Research, 78(1), 41–50. https://doi.org/10.1016/S0378-4290(02)00086-2. Oswalt, J., Rieff, J. M., Severino, L. S., Auld, D. L., Bednarz, C. W., & Ritchie, G. L. (2014). Plant height and seed yield of castor (Ricinus communis L.) sprayed with growth retardants and harvest aid chemicals. Industrial Crops and Products, 61, 272–277. https://doi.org/10.1016/j.indcrop.2014.07.006. Pal, S., Zhao, J., Khan, A., Yadav, N. S., Batushansky, A., Barak, S., et al. (2016). 
Paclobutrazol induces tolerance in tomato to deficit irrigation through diversified effects on plant morphology, physiology and metabolism. Scientific Reports, 6, 39321. https://doi.org/10.1038/srep39321. Pathak, N., Rai, A. K., Kumari, R., Thapa, A., & Bhat, K. V. (2014). Sesame crop: An underexploited oilseed holds tremendous potential for enhanced food value. Agricultural Sciences, 2014(5), 519–529. https://doi.org/10.4236/as.2014.56054. Peng, D., Chen, X., Yin, Y., Lu, K., Yang, W., Tang, Y., et al. (2014). Lodging resistance of winter wheat (Triticum aestivum L.): Lignin accumulation and its related enzymes activities due to the application of paclobutrazol or gibberellin acid. Field Crops Research, 157, 1–7. https://doi.org/10.1016/j.fcr.2013.11.015. Qi, W.-Z., Liu, H.-H., Liu, P., Dong, S.-T., Zhao, B.-Q., So, H. B., et al. (2012). Morphological and physiological characteristics of corn (Zea mays L.) roots from cultivars with different yield potentials. European Journal of agronomy, 38, 54–63. https://doi.org/10.1016/j.eja.2011.12.003. Rady, M. M., & Gaballah, M. S. (2012). Improving barley yield grown under water stress conditions. Research Journal of Recent Sciences, 1(6), 1–6. Raja, A., Hattab, K. O., Gurusamy, L., & Suganya, S. (2007). Sulphur levels on nutrient uptake and yield of sesame varieties and nutrient availability. International Journal of Soil Science, 2(4), 278–285. https://doi.org/10.3923/ijss.2007.278.285. Rajala, A., Peltonen-Sainio, P., Onnela, M., & Jackson, M. (2002). Effects of applying stem-shortening plant growth regulators to leaves on root elongation by seedlings of wheat, oat and barley: Mediation by ethylene. Plant Growth Regulation, 38(1), 51–59. https://doi.org/10.1023/A:1020924307455. Rangkadilok, N., Pholphana, N., Mahidol, C., Wongyai, W., Saengsooksree, K., Nookabkaew, S., et al. (2010). Variation of sesamin, sesamolin and tocopherols in sesame (Sesamum indicum L.) seeds and oil products in Thailand. Food Chemistry, 122(3), 724–730. https://doi.org/10.1016/j.foodchem.2010.03.044. Raza, M. A., Feng, L. Y., Manaf, A., Wasaya, A., Ansar, M., Hussain, A., et al. (2018). Sulphur application increases seed yield and oil content in sesame seeds under rainfed conditions. Field Crops Research, 218, 51–58. https://doi.org/10.1016/j.fcr.2017.12.024. Raza, M. A., Feng, L. Y., van der Werf, W., Iqbal, N., Khan, I., Hassan, M. J., et al. (2019). Optimum leaf defoliation: A new agronomic approach for increasing nutrient uptake and land equivalent ratio of maize soybean relay intercropping system. Field Crops Research, 244, 107647. https://doi.org/10.1016/j.fcr.2019.107647. Global Soil Map. (2015). https://www.isric.org/sites/default/files/WRBSoilMap.pdf. Accessed 1 Jul 2021. Scarisbrick, D., Addo-Quaye, A., Daniels, R., & Mahamud, S. (1985). The effect of paclobutrazol on plant height and seed yield of oil-seed rape (Brassica napus L.). The Journal of Agricultural Science, 105(3), 605–612. https://doi.org/10.1017/S0021859600059517. Silva, L., Santos, J., Vieira, D., Beltrão, N., Alves, I., & Jerônimo, J. (2002). Simple method to estimate leaf area of sesame plants (Sesamum indicum L.). Brazilian Journal of Oilseeds and Fibrous, 6, 491–495. Soumya, P., Kumar, P., & Pal, M. (2017). Paclobutrazol: a novel plant growth regulator and multi-stress ameliorant. Indian Journal of Plant Physiology, 22(3), 267–278. https://doi.org/10.1007/s40502-017-0316-x. Tesfahun, W. (2018). A review on: Response of crops to paclobutrazol application. 
Cogent Food & Agriculture, 4(1), 1525169. https://doi.org/10.1080/23311932.2018.1525169. Thornton, P. K., Ericksen, P. J., Herrero, M., & Challinor, A. J. (2014). Climate variability and vulnerability to climate change: A review. Global Change Biology, 20(11), 3313–3328. https://doi.org/10.1111/gcb.12581. Tripathi-Sayre, K., Kaul, J., & Narang, R. (2003). Growth and morphology of spring wheat (Triticum aestivum L.) culms and their association with lodging: Effects of genotypes, N levels and ethephon. Field Crops Research, 84(3), 271–290. https://doi.org/10.1016/S0378-4290(03)00095-9. Upadhyaya, A., Sankhla, D., Davis, T. D., Sankhla, N., & Smith, B. (1985). Effect of paclobutrazol on the activities of some enzymes of activated oxygen metabolism and lipid peroxidation in senescing soybean leaves. Journal of Plant Physiology, 121(5), 453–461. https://doi.org/10.1016/S0176-1617(85)80081-X. Uzun, B., Arslan, Ç., & Furat, Ş. (2008). Variation in fatty acid compositions, oil content and oil yield in a germplasm collection of sesame (Sesamum indicum L.). Journal of the American Oil Chemists' Society, 85(12), 1135–1142. https://doi.org/10.1007/s11746-008-1304-0. Wan-rong, G., Yao, M., Jun-bao, Z., Biao, J., Yong-chao, W., Jing, L., et al. (2014). Regulation of foliar application DCPTA on growth and development of maize seedling leaves in Heilongjiang Province. Journal of Northeast Agricultural University (English Edition), 21(2), 1–11. https://doi.org/10.1016/S1006-8104(14)60028-3. Wang, L.-H., & Li, C.-H. (1992). The effect of paclobutrazol on physiological and biochemical changes in the primary roots of pea. Journal of Experimental Botany, 43(10), 1367–1372. https://doi.org/10.1093/jxb/43.10.1367. Wang, X., Mathieu, A., Cournède, P.-H., Allirand, J.-M., Jullien, A., de Reffye, P., et al. (2011). Variability and regulation of the number of ovules, seeds and pods according to assimilate availability in winter oilseed rape (Brassica napus L.). Field Crops Research, 122(1), 60–69. https://doi.org/10.1016/j.fcr.2011.02.008. Wang, X., Yang, W., Chen, G., Li, Q., & Wang, X. (2009). Effects of spraying uniconazole on leaf senescence and yield of maize at late growth stage. Journal of Maize Sciences, 17(1), 86–88. Wei, X., Liu, K., Zhang, Y., Feng, Q., Wang, L., Zhao, Y., et al. (2015). Genetic discovery for oil production and quality in sesame. Nature Communications, 6, 8609. https://doi.org/10.1038/ncomms9609. Wiggans, S., Metcalfe, D., & Thompson, H. (1956). The use of desiccant sprays in harvesting birdsfoot trefoil for seed. Agronomy Journal, 48(7), 281–284. https://doi.org/10.2134/agronj1956.00021962004800070001x. Wilkinson, R., & Richards, D. (1987). Effects of paclobutrazol on growth and flowering of Bouvardia humboldtii. HortScience, 22(3), 444–445. Yan, Y., Gong, W., Yang, W., Wan, Y., Chen, X., Chen, Z., et al. (2010). Seed treatment with uniconazole powder improves soybean seedling growth under shading by corn in relay strip intercropping system. Plant Production Science, 13(4), 367–374. https://doi.org/10.1626/pps.13.367. Yan, Y., Wan, Y., Liu, W., Wang, X., Yong, T., Yang, W., et al. (2015). Influence of seed treatment with uniconazole powder on soybean growth, photosynthesis, dry matter accumulation after flowering and yield in relay strip intercropping system. Plant Production Science, 18(3), 295–301. https://doi.org/10.1626/pps.18.295. Zhang, X., Chen, S., Sun, H., Wang, Y., & Shao, L. (2009). 
Root size, distribution and soil water depletion as affected by cultivars and environmental factors. Field Crops Research, 114(1), 75–83. https://doi.org/10.1016/j.fcr.2009.07.006. Zhou, W., & Xi, H. (1993). Effects of mixtalol and paclobutrazol on photosynthesis and yield of rape (Brassica napus). Journal of Plant Growth Regulation, 12(3), 157–161. https://doi.org/10.1007/BF00189647. Zhu, L.-H., van de Peppel, A., Li, X.-Y., & Welander, M. (2004). Changes of leaf water potential and endogenous cytokinins in young apple trees treated with or without paclobutrazol under drought conditions. Scientia Horticulturae, 99(2), 133–141. https://doi.org/10.1016/S0304-4238(03)00089-X. Open access funding provided by Swedish University of Agricultural Sciences.. This research was supported by the Pakistan Agricultural Research Council (PARC) under the Agricultural Linkages Program (ALP) on "Research for Productivity Enhancement of Drought Tolerant and Shattering Resistant Cultivars of Sesame in Rainfed Areas of Punjab Pakistan". Department of Agronomy, PMAS-Arid Agriculture University, Rawalpindi, 46300, Pakistan Muhammad Zeeshan Mehmood, Ghulam Qadir, Obaid Afzal, Muhammad Ansar, Muhammad Aqeel Aslam & Mukhtar Ahmed College of Agronomy, Sichuan Agricultural University, Chengdu, 611130, China Atta Mohi Ud Din & Muhammad Ali Raza Department of Grassland Science, Animal Science and Technology College, Sichuan Agricultural University, Chengdu, 611130, China Imran Khan, Muhammad Jawad Hassan & Samrah Afzal Awan Department of Agronomy, Bahauddin Zakariya University, Multan, 60800, Pakistan Shakeel Ahmad Department of Agricultural Research for Northern Sweden, Swedish University of Agricultural Sciences, 90183, Umeå, Sweden Mukhtar Ahmed Muhammad Zeeshan Mehmood Ghulam Qadir Obaid Afzal Atta Mohi Ud Din Muhammad Ali Raza Muhammad Jawad Hassan Samrah Afzal Awan Muhammad Ansar Muhammad Aqeel Aslam Correspondence to Mukhtar Ahmed. Authors did not have any conflict of interest. Mehmood, M.Z., Qadir, G., Afzal, O. et al. Paclobutrazol Improves Sesame Yield by Increasing Dry Matter Accumulation and Reducing Seed Shattering Under Rainfed Conditions. Int. J. Plant Prod. 15, 337–349 (2021). https://doi.org/10.1007/s42106-021-00132-w Issue Date: September 2021 DOI: https://doi.org/10.1007/s42106-021-00132-w Oilseed Seed number
CommonCrawl
Beal conjecture

The Beal conjecture is the following conjecture in number theory: if
$$A^{x}+B^{y}=C^{z},$$
where A, B, C, x, y, and z are positive integers with x, y, z > 2, then A, B, and C have a common prime factor. Equivalently, there are no solutions to the above equation in positive integers A, B, C, x, y, z with A, B, and C being pairwise coprime and all of x, y, z being greater than 2. The conjecture was formulated in 1993 by Andrew Beal, a banker and amateur mathematician, while investigating generalizations of Fermat's last theorem.[1][2] Since 1997, Beal has offered a monetary prize for a peer-reviewed proof of this conjecture or a counterexample.[3] The value of the prize has increased several times and is currently $1 million.[4] In some publications, this conjecture has occasionally been referred to as a generalized Fermat equation,[5] the Mauldin conjecture,[6] and the Tijdeman-Zagier conjecture.[7][8][9]

Related examples

To illustrate, the solution $3^{3}+6^{3}=3^{5}$ has bases with a common factor of 3, the solution $7^{3}+7^{4}=14^{3}$ has bases with a common factor of 7, and $2^{n}+2^{n}=2^{n+1}$ has bases with a common factor of 2. Indeed the equation has infinitely many solutions where the bases share a common factor, including generalizations of the above three examples, respectively
$$3^{3n}+[2(3^{n})]^{3}=3^{3n+2},\qquad n\geq 1;$$
$$[b(a^{n}-b^{n})^{k}]^{n}+(a^{n}-b^{n})^{kn+1}=[a(a^{n}-b^{n})^{k}]^{n},\qquad a>b,\quad b\geq 1,\quad k\geq 1,\quad n\geq 3;$$
$$[a(a^{n}+b^{n})^{k}]^{n}+[b(a^{n}+b^{n})^{k}]^{n}=(a^{n}+b^{n})^{kn+1},\qquad a\geq 1,\quad b\geq 1,\quad k\geq 1,\quad n\geq 3.$$
Furthermore, for each solution (with or without coprime bases), there are infinitely many solutions with the same set of exponents and an increasing set of non-coprime bases. That is, for a solution $A_{1}^{x}+B_{1}^{y}=C_{1}^{z}$ we additionally have $A_{n}^{x}+B_{n}^{y}=C_{n}^{z}$ for $n\geq 2$, where
$$A_{n}=(A_{n-1}^{yz+1})(B_{n-1}^{yz})(C_{n-1}^{yz}),$$
$$B_{n}=(A_{n-1}^{xz})(B_{n-1}^{xz+1})(C_{n-1}^{xz}),$$
$$C_{n}=(A_{n-1}^{xy})(B_{n-1}^{xy})(C_{n-1}^{xy+1}).$$
Any solution of the Beal equation (with all exponents greater than 2) will necessarily involve three terms all of which are 3-powerful numbers, i.e. numbers where the exponent of every prime factor is at least three. It is known that there are an infinite number of such sums involving coprime 3-powerful numbers;[10] however, such sums are rare.
The smallest two examples are:
$$271^{3}+2^{3}\,3^{5}\,73^{3}=919^{3}=776{,}151{,}559$$
$$3^{4}\,29^{3}\,89^{3}+7^{3}\,11^{3}\,167^{3}=2^{7}\,5^{4}\,353^{3}=3{,}518{,}958{,}160{,}000$$
What distinguishes Beal's conjecture is that it requires each of the three terms to be expressible as a single power.

Relation to other conjectures

Fermat's Last Theorem established that $A^{n}+B^{n}=C^{n}$ has no solutions for n > 2 for positive integers A, B, and C. If any solutions had existed to Fermat's Last Theorem, then by dividing out every common factor, there would also exist solutions with A, B, and C coprime. Hence, Fermat's Last Theorem can be seen as a special case of the Beal conjecture restricted to x = y = z. The Fermat–Catalan conjecture is that $A^{x}+B^{y}=C^{z}$ has only finitely many solutions with A, B, and C being positive integers with no common prime factor and x, y, and z being positive integers satisfying
$$\frac{1}{x}+\frac{1}{y}+\frac{1}{z}<1.$$
Beal's conjecture can be restated as "All Fermat–Catalan conjecture solutions will use 2 as an exponent." The abc conjecture would imply that there are at most finitely many counterexamples to Beal's conjecture.

Partial results

In the cases below where 2 is an exponent, multiples of 2 are also proven, since a power can be squared. Similarly, where n is an exponent, multiples of n are also proven. Where solutions involving a second power are alluded to below, they can be found specifically at Fermat-Catalan conjecture#Known solutions.
The case gcd(x, y, z) ≥ 3 is implied by Fermat's Last Theorem.
The case (x, y, z) = (2, 4, 4) and all its permutations were proven to have no solutions by Pierre de Fermat in the 1600s. (See one proof here and another here.)
A potential class of solutions to the equation, namely those with A, B, C also forming a Pythagorean triple, were considered by L. Jesmanowicz in the 1950s. J. Jozefiak proved that there are an infinite number of primitive Pythagorean triples that cannot satisfy the Beal equation. Further results are due to Chao Ko.[11]
The case x = y = z > 2 is Fermat's Last Theorem, proven to have no solutions by Andrew Wiles in 1994.[12]
The cases (x, y, z) = (2, n, n) and all its permutations were proved for n ≥ 4 by Darmon and Merel in 1995.[13]
The case (x, y, z) = (n, 4, 4) and all its permutations have been proven for n ≥ 2.[14]
The case (x, y, z) = (5, 2n, 2n) and all its permutations were proved for n ≥ 2 by Chen.[14]
The impossibility of the case A = 1 or B = 1 is implied by Catalan's conjecture, proven in 2002 by Preda Mihăilescu. (Notice C cannot be 1, or one of A and B must be 0, which is not permitted.)
The case (x, y, z) = (2, 3, 7) and all its permutations were proven to have only five solutions, none of them involving an even power greater than 2, by Bjorn Poonen, Edward F. Schaefer, and Michael Stoll in 2005.[15]
The case (x, y, z) = (2, 3, 8) and all its permutations are known to have only three solutions, none of them involving an even power greater than 2.[14]
The case (x, y, z) = (2, 3, 9) and all its permutations are known to have only two solutions, neither of them involving an even power greater than 2.[14][9]
The case (x, y, z) = (2, 3, 10) and all its permutations were proved by David Brown in 2009 (other than $1^{10}+2^{3}=3^{2}$).[16]
The case (x, y, z) = (2, 3, 2n) and all its permutations were proved for 5 ≤ n ≤ 1000 except n = 7 and n = 31 by Chen (other than $1^{2n}+2^{3}=3^{2}$).[14]
The case (x, y, z) = (2, 4, 5) and all its permutations are known to have only two solutions, neither of them involving an even power greater than 2.[14]
The case (x, y, z) = (2, 4, n) and all its permutations were proved for n ≥ 6 by Michael Bennett, Jordan Ellenberg, and Nathan Ng in 2009.[17]
The case (x, y, z) = (2, 3, 15) and all its permutations were proved by Samir Siksek and Michael Stoll in 2013.[18]
The case (x, y, z) = (3, 3, n) and all its permutations have been proven for $3 \leq n \leq 10^{9}$.[14]
The cases (5, 5, 7), (5, 5, 19), and (7, 7, 5) and all their permutations were proved by Sander R. Dahmen and Samir Siksek in 2013.[19]
The Darmon–Granville theorem uses Faltings's theorem to show that for every specific choice of exponents (x, y, z), there are at most finitely many coprime solutions for (A, B, C).[20][7]: p. 64
Peter Norvig, Director of Research at Google, reported having conducted a series of numerical searches for counterexamples to Beal's conjecture. Among his results, he excluded all possible solutions having each of x, y, z ≤ 7 and each of A, B, C ≤ 250,000, as well as possible solutions having each of x, y, z ≤ 100 and each of A, B, C ≤ 10,000.[21]

Prize

For a published proof or counterexample, banker Andrew Beal initially offered a prize of US $5,000 in 1997, raising it to $50,000 over ten years,[3] but has since raised it to US $1,000,000.[4] The American Mathematical Society (AMS) holds the $1 million prize in a trust until the Beal conjecture is solved.[22] It is supervised by the Beal Prize Committee (BPC), which is appointed by the AMS president.[23]

Variants

The counterexamples $7^{3}+13^{2}=2^{9}$ and $1^{m}+2^{3}=3^{2}$ show that the conjecture would be false if one of the exponents were allowed to be 2. The Fermat–Catalan conjecture is an open conjecture dealing with such cases. If we allow that at most one of the exponents is 2, then there may be only finitely many solutions (except the case $1^{m}+2^{3}=3^{2}$). If A, B, C can have a common prime factor then the conjecture is not true; a classic counterexample is $2^{10}+2^{10}=2^{11}$. A variation of the conjecture asserting that x, y, z (instead of A, B, C) must have a common prime factor is not true. A counterexample is $27^{4}+162^{3}=9^{7}$, in which 4, 3, and 7 have no common prime factor. (In fact, the maximum common prime factor of the exponents that is valid is 2; a common factor greater than 2 would be a counterexample to Fermat's Last Theorem.) The conjecture is not valid over the larger domain of Gaussian integers. After a prize of $50 was offered for a counterexample, Fred W. Helenius provided $(-2+i)^{3}+(-2-i)^{3}=(1+i)^{4}$.[24]
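The identities and counterexamples quoted above are easy to machine-check. The following short Python sketch, which is not part of the original article, verifies several of them and runs a toy brute-force search in the spirit of Norvig's, over ranges far smaller than those he reports.

# Sanity checks for the identities quoted in this article, plus a toy search.
from math import gcd

def common_prime_factor(a, b, c):
    """True if A, B, C share a prime factor (equivalently, gcd(A, B, C) > 1)."""
    return gcd(gcd(a, b), c) > 1

# Example solutions whose bases share a common factor ("Related examples")
assert 3**3 + 6**3 == 3**5
assert 7**3 + 7**4 == 14**3
assert 2**10 + 2**10 == 2**11

# Counterexamples to the weakened variants (an exponent of 2 is allowed here)
assert 7**3 + 13**2 == 2**9
assert 27**4 + 162**3 == 9**7
assert (-2 + 1j)**3 + (-2 - 1j)**3 == (1 + 1j)**4  # Gaussian-integer variant (exact for these small integers)

def search(max_base=60, max_exp=6):
    """Toy counterexample search: A^x + B^y = C^z with all exponents > 2 and no
    common prime factor among the bases. Ranges are tiny compared with Norvig's."""
    powers = {}                                   # value -> (C, z) for C^z with z > 2
    for c in range(2, max_base + 1):
        for z in range(3, max_exp + 1):
            powers[c**z] = (c, z)
    hits = []
    for a in range(1, max_base + 1):
        for x in range(3, max_exp + 1):
            for b in range(a, max_base + 1):
                for y in range(3, max_exp + 1):
                    target = a**x + b**y
                    if target in powers and not common_prime_factor(a, b, powers[target][0]):
                        hits.append((a, x, b, y) + powers[target])
    return hits

print("counterexamples found in toy range:", search())   # expected: []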
See also
Euler's sum of powers conjecture
Jacobi–Madden equation
Prouhet–Tarry–Escott problem
Taxicab number
Pythagorean quadruple
Sums of powers, a list of related conjectures and theorems

References
1. "Beal Conjecture". American Mathematical Society. Retrieved 21 August 2016.
2. "Beal Conjecture". Bealconjecture.com. Retrieved 2014-03-06.
3. R. Daniel Mauldin (1997). "A Generalization of Fermat's Last Theorem: The Beal Conjecture and Prize Problem" (PDF). Notices of the AMS. 44 (11): 1436–1439.
4. "Beal Prize". Ams.org. Retrieved 2014-03-06.
5. Bennett, Michael A.; Chen, Imin; Dahmen, Sander R.; Yazdani, Soroosh (June 2014). "Generalized Fermat Equations: A Miscellany" (PDF). Simon Fraser University. Retrieved 1 October 2016.
6. "Mauldin / Tijdeman-Zagier Conjecture". Prime Puzzles. Retrieved 1 October 2016.
7. Elkies, Noam D. (2007). "The ABC's of Number Theory" (PDF). The Harvard College Mathematics Review. 1 (1).
8. Michel Waldschmidt (2004). "Open Diophantine Problems". Moscow Mathematical Journal. 4: 245–305. arXiv:math/0312440. doi:10.17323/1609-4514-2004-4-1-245-305.
9. Crandall, Richard; Pomerance, Carl (2000). Prime Numbers: A Computational Perspective. Springer. p. 417. ISBN 978-0387-25282-7.
10. Nitaj, Abderrahmane (1995). "On A Conjecture of Erdos on 3-Powerful Numbers". Bulletin of the London Mathematical Society. 27 (4): 317–318. CiteSeerX 10.1.1.24.563. doi:10.1112/blms/27.4.317.
11. Wacław Sierpiński, Pythagorean triangles, Dover, 2003, p. 55 (orig. Graduate School of Science, Yeshiva University, 1962).
12. "Billionaire Offers $1 Million to Solve Math Problem | ABC News Blogs – Yahoo". Gma.yahoo.com. 2013-06-06. Retrieved 2014-03-06.
13. H. Darmon and L. Merel. Winding quotients and some variants of Fermat's Last Theorem, J. Reine Angew. Math. 490 (1997), 81–100.
14. Frits Beukers (January 20, 2006). "The generalized Fermat equation" (PDF). Staff.science.uu.nl. Retrieved 2014-03-06.
15. Poonen, Bjorn; Schaefer, Edward F.; Stoll, Michael (2005). "Twists of X(7) and primitive solutions to x^2 + y^3 = z^7". Duke Mathematical Journal. 137: 103–158. arXiv:math/0508174. doi:10.1215/S0012-7094-07-13714-1.
16. Brown, David (2009). "Primitive Integral Solutions to x^2 + y^3 = z^10". arXiv:0911.2932 [math.NT].
17. "The Diophantine Equation" (PDF). Math.wisc.edu. Retrieved 2014-03-06.
18. Siksek, Samir; Stoll, Michael (2013). "The Generalised Fermat Equation x^2 + y^3 = z^15". Archiv der Mathematik. 102 (5): 411–421. arXiv:1309.4421. doi:10.1007/s00013-014-0639-z.
19. Dahmen, Sander R.; Siksek, Samir (2013). "Perfect powers expressible as sums of two fifth or seventh powers". arXiv:1309.4030 [math.NT].
20. Darmon, H.; Granville, A. (1995). "On the equations z^m = F(x, y) and Ax^p + By^q = Cz^r". Bulletin of the London Mathematical Society. 27 (6): 513–43. doi:10.1112/blms/27.6.513.
21. Norvig, Peter. "Beal's Conjecture: A Search for Counterexamples". Norvig.com. Retrieved 2014-03-06.
22. Walter Hickey (5 June 2013). "If You Can Solve This Math Problem, Then A Texas Banker Will Give You $1 Million". Business Insider. Retrieved 8 July 2016.
23. "$1 Million Math Problem: Banker D. Andrew Beal Offers Award To Crack Conjecture Unsolved For 30 Years". International Science Times. 5 June 2013. Archived from the original on 29 September 2017.
24. "Neglected Gaussians". Mathpuzzle.com. Retrieved 2014-03-06.

External links
The Beal Prize office page
Bealconjecture.com
Math.unt.edu
Beal Conjecture at PlanetMath.org.
Mathoverflow.net discussion about the name and date of origin of the theorem
CommonCrawl
Variation in atomic radii of elements in different blocks? If we look at the values for the atomic radii (look at the table here), we can see that they rapidly decrease across the period initially. Looking at the second period, the graph is pretty steep early on. But further along the period, as we enter the p-block elements, the graph levels out. This trend is repeated across the third and higher periods as well. Why is that? The variation of atomic radii across the d-block and f-block elements is even more gradual. Why is that again? periodic-trends atomic-radius Gerard

Valence electrons experience an electrostatic force from the nucleus. The nucleus has its characteristic positive charge, but due to shielding by core electrons the total positive charge is not completely felt by the electrons, so the actual net positive charge felt by the valence electron has its own name and is called the effective nuclear charge $Z_{eff}$. This effective nuclear charge largely affects the size of the atom. The general trend of the periodic table is that as you go down a group, and go from right to left, the $Z_{eff}$ decreases, and you see an increase in the size of the atom. The easy way to think of it is that as you go from left to right, the number of valence electrons and protons increases but the number of core electrons stays the same; so, you see an increase in $Z_{eff}$. $Z_{eff}$ is calculated using Slater's rules, where $$Z_{eff}=Z-s$$ where $Z$ is the atomic number and $s$ is the shielding constant. The rules for calculating it are as follows (from Wikipedia): Firstly, the electrons are arranged into a sequence of groups in order of increasing principal quantum number n, and for equal n in order of increasing azimuthal quantum number l, except that s- and p- orbitals are kept together. $\mathrm{(1s) (2s,2p) (3s,3p) (3d) (4s,4p) (4d) (4f) (5s, 5p) (5d)}$ etc. Each group is given a different shielding constant which depends upon the number and types of electrons in those groups preceding it. The shielding constant for each group is formed as the sum of the following contributions: An amount of 0.35 from each other electron within the same group except for the (1s) group, where the other electron contributes only 0.30. If the group is of the (s p) type, an amount of 0.85 from each electron with principal quantum number n one less than that of the group, and an amount of 1.00 for each electron with principal quantum number two or more less. If the group is of the (d) or (f) type, an amount of 1.00 for each electron "closer" to the atom than the group. This includes i) electrons with a smaller principal quantum number n and ii) electrons with an equal principal quantum number and a smaller azimuthal quantum number l. John Snow

I know about Slater's rule. I just don't see how it applies to my question. – Gerard May 21 '15 at 1:22
What's confusing? I explain why atomic radius is affected by $Z_{eff}$, and the rules explain how d- and f-block electrons contribute differently than s- and p-block electrons. – John Snow May 21 '15 at 1:37
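As a quick numerical illustration of the rules quoted in the answer, here is a minimal Python sketch (added for illustration, not part of the original answer) that applies Slater's rules to the outermost electron of the period-2 elements and to the 4s electron at the two ends of the 3d series. The electron configurations are hardcoded, and the results approximate common textbook tabulations.

# Minimal sketch of Slater's rules for the Z_eff felt by an outermost s/p electron.
# It shows why Z_eff (hence atomic radius) changes quickly across the 2p elements
# but only slowly across the 3d block.

def zeff_valence_sp(Z, n_same_group, n_minus_1, n_minus_2_or_less):
    """Z_eff for an electron in an (ns, np) group: 0.35 per other electron in the
    same group, 0.85 per electron with principal quantum number n-1, and 1.00 per
    electron with principal quantum number n-2 or lower."""
    s = 0.35 * (n_same_group - 1) + 0.85 * n_minus_1 + 1.00 * n_minus_2_or_less
    return Z - s

# Period 2: the valence group is (2s,2p); the only inner electrons are the two 1s.
period2 = {"Li": (3, 1), "Be": (4, 2), "B": (5, 3), "C": (6, 4),
           "N": (7, 5), "O": (8, 6), "F": (9, 7), "Ne": (10, 8)}
for sym, (Z, n_val) in period2.items():
    print(sym, round(zeff_valence_sp(Z, n_val, n_minus_1=2, n_minus_2_or_less=0), 2))

# End points of the 3d block, for the 4s electron: the (3s,3p) and 3d electrons
# all count 0.85, so adding 3d electrons raises Z_eff only slowly.
print("Sc 4s:", round(zeff_valence_sp(21, 2, n_minus_1=8 + 1, n_minus_2_or_less=10), 2))   # ~3.00
print("Zn 4s:", round(zeff_valence_sp(30, 2, n_minus_1=8 + 10, n_minus_2_or_less=10), 2))  # ~4.35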
Regular and Chaotic Dynamics ISSN 1560-3547 (print), 1468-4845 (on-line)

Vyacheslav Grines
Address: ul. Bolshaya Pecherskaya 25/12, Nizhnii Novgorod, 603155, Russia
E-mail: [email protected]
University: National Research University Higher School of Economics
Professor: HSE Campus in Nizhny Novgorod, Faculty of Informatics, Mathematics, and Computer Science (HSE Nizhny Novgorod), Department of Fundamental Mathematics; Chief Research Fellow: HSE Campus in Nizhny Novgorod / Faculty of Informatics, Mathematics, and Computer Science (HSE Nizhny Novgorod), International Laboratory of Dynamical Systems and Applications
Born: December 13, 1946 in Isyaslavl', Ukraine.
2015-Present: Professor: HSE Campus in Nizhny Novgorod, Faculty of Informatics, Mathematics, and Computer Science (HSE Nizhny Novgorod), Department of Fundamental Mathematics; Chief Research Fellow: HSE Campus in Nizhny Novgorod, Faculty of Informatics, Mathematics, and Computer Science (HSE Nizhny Novgorod), International Laboratory of Dynamical Systems and Applications;
2013-2015: Professor of department of numerical and functional analysis, Lobachevskii State University, Nizhnii Novgorod;
1977-2013: Professor of Mathematics, Head of department of mathematics of Nizhny Novgorod State Agriculture Academy;
1969-1977: Researcher, Res. Inst. of Appl. Math. & Cybernetics, State University, N. Novgorod
Scientific degrees: 1976: candidate of physical and mathematical sciences. 1998: doctor of physical and mathematical sciences.
Dynamical Systems and Foliations on Manifolds.

Grines V. Z., Medvedev V. S., Zhuzhoma E. V. On the Topological Structure of Manifolds Supporting Axiom A Systems 2022, vol. 27, no. 6, pp. 613-628
Let $M^n$, $n\geqslant 3$, be a closed orientable $n$-manifold and $\mathbb{G}(M^n)$ the set of A-diffeomorphisms $f: M^n\to M^n$ whose nonwandering set satisfies the following conditions: $(1)$ each nontrivial basic set of the nonwandering set is either an orientable codimension one expanding attractor or an orientable codimension one contracting repeller; $(2)$ the invariant manifolds of isolated saddle periodic points intersect transversally and codimension one separatrices of such points can intersect only one-dimensional separatrices of other isolated periodic orbits. We prove that the ambient manifold $M^n$ is homeomorphic to either the sphere $\mathbb S^n$ or the connected sum of $k_f \geqslant 0$ copies of the torus $\mathbb T^n$, $\eta_f\geqslant 0$ copies of $\mathbb S^{n-1}\times \mathbb S^1$ and $l_f\geqslant 0$ simply connected manifolds $N^n_1, \dots, N^n_{l_f}$ which are not homeomorphic to the sphere. Here $k_f\geqslant 0$ is the number of connected components of all nontrivial basic sets, $\eta_{f}=\frac{\kappa_f}{2} -k_f+\frac{\nu_f - \mu_f +2}{2}$, $\kappa_f\geqslant 0$ is the number of bunches of all nontrivial basic sets, $\mu_f\geqslant 0$ is the number of sinks and sources, $\nu_f\geqslant 0$ is the number of isolated saddle periodic points with Morse index $1$ or $n-1$, $0\leqslant l_f\leqslant \lambda_f$, $\lambda_f\geqslant 0$ is the number of all periodic points whose Morse index does not belong to the set $\{0,1,n-1,n\}$ of diffeomorphism $f$. Similar statements hold for gradient-like flows on $M^n$. In this case there are no nontrivial basic sets in the nonwandering set of a flow. As an application, we get sufficient conditions for the existence of heteroclinic intersections and periodic trajectories for Morse – Smale flows.
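(Editorial note, not part of the journal page: the count $\eta_f$ above is plain arithmetic in the listed invariants of $f$; the values in the example below are hypothetical and chosen only to exercise the formula.)

def eta(kappa_f, k_f, nu_f, mu_f):
    # eta_f = kappa_f/2 - k_f + (nu_f - mu_f + 2)/2, with the counts as defined in the abstract
    return kappa_f / 2 - k_f + (nu_f - mu_f + 2) / 2

# e.g. two bunches, one nontrivial basic-set component, no isolated saddles of
# index 1 or n-1, and two sinks/sources give eta_f = 0
print(eta(kappa_f=2, k_f=1, nu_f=0, mu_f=2))  # 0.0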
Keywords: Decomposition of manifolds, axiom A systems, Morse – Smale systems, heteroclinic intersections Citation: Grines V. Z., Medvedev V. S., Zhuzhoma E. V., On the Topological Structure of Manifolds Supporting Axiom A Systems, Regular and Chaotic Dynamics, 2022, vol. 27, no. 6, pp. 613-628

Grines V. Z., Gurevich E. Y., Pochinka O. V. On the Number of Heteroclinic Curves of Diffeomorphisms with Surface Dynamics
Separators are fundamental plasma physics objects that play an important role in many astrophysical phenomena. Looking for separators and their number is one of the first steps in studying the topology of the magnetic field in the solar corona. In the language of dynamical systems, separators are noncompact heteroclinic curves. In this paper we give an exact lower estimate of the number of noncompact heteroclinic curves for a 3-diffeomorphism with the so-called "surface dynamics". Also, we prove that ambient manifolds for such diffeomorphisms are mapping tori. Keywords: separator in a magnetic field, heteroclinic curves, mapping torus, gradient-like diffeomorphisms Citation: Grines V. Z., Gurevich E. Y., Pochinka O. V., On the Number of Heteroclinic Curves of Diffeomorphisms with Surface Dynamics, Regular and Chaotic Dynamics, 2017, vol. 22, no. 2, pp. 122-135

Grines V. Z., Malyshev D. S., Pochinka O. V., Zinina S. K. Efficient Algorithms for the Recognition of Topologically Conjugate Gradient-like Diffeomorphisms
It is well known that the topological classification of structurally stable flows on surfaces as well as the topological classification of some multidimensional gradient-like systems can be reduced to a combinatorial problem of distinguishing graphs up to isomorphism. The isomorphism problem of general graphs obviously can be solved by a standard enumeration algorithm. However, an efficient algorithm (i.e., polynomial in the number of vertices) has not yet been developed for it, and the problem has not been proved to be intractable (i.e., NP-complete). We give polynomial-time algorithms for recognition of the corresponding graphs for two gradient-like systems. Moreover, we present efficient algorithms for determining the orientability and the genus of the ambient surface. This result, in particular, sheds light on the classification of configurations that arise from simple, point-source potential-field models in efforts to determine the nature of the quiet-Sun magnetic field. Keywords: Morse – Smale diffeomorphism, gradient-like diffeomorphism, topological classification, three-color graph, directed graph, graph isomorphism, surface orientability, surface genus, polynomial-time algorithm, magnetic field Citation: Grines V. Z., Malyshev D. S., Pochinka O. V., Zinina S. K., Efficient Algorithms for the Recognition of Topologically Conjugate Gradient-like Diffeomorphisms, Regular and Chaotic Dynamics, 2016, vol. 21, no. 2, pp. 189-203

Grines V. Z., Levchenko Y. A., Medvedev V. S., Pochinka O. V. On the Dynamical Coherence of Structurally Stable 3-diffeomorphisms
We prove that each structurally stable diffeomorphism $f$ on a closed 3-manifold $M^3$ with a two-dimensional surface nonwandering set is topologically conjugated to some model dynamically coherent diffeomorphism. Keywords: structural stability, surface basic set, partial hyperbolicity, dynamical coherence Citation: Grines V. Z., Levchenko Y. A., Medvedev V. S., Pochinka O. V., On the Dynamical Coherence of Structurally Stable 3-diffeomorphisms, Regular and Chaotic Dynamics, 2014, vol. 19, no. 4, pp. 506-512
Grines V. Z., Pochinka O. V. Energy functions for dynamical systems 2010, vol. 15, no. 2-3, pp. 185-193
The paper contains an exposition of results devoted to the existence of energy functions for dynamical systems. Keywords: Lyapunov function, energy function, Morse–Smale system Citation: Grines V. Z., Pochinka O. V., Energy functions for dynamical systems, Regular and Chaotic Dynamics, 2010, vol. 15, no. 2-3, pp. 185-193

Grines V. Z., Zhuzhoma E. V. Expanding attractors
The article is a survey on local and global structures (including classification results) of expanding attractors of diffeomorphisms $f : M \to M$ of a closed smooth manifold $M$. Beginning with the most familiar expanding attractors (Smale solenoid; DA-attractor; Plykin attractor; Robinson–Williams attractors), one reviews the Williams theory, Bothe's classification of one-dimensional solenoids in 3-manifolds, Grines–Plykin–Zhirov's classification of one-dimensional expanding attractors on surfaces, and Grines–Zhuzhoma's classification of codimension one expanding attractors of structurally stable diffeomorphisms. The main theorems are accompanied by ideas of their proofs. Keywords: Axiom A diffeomorphisms, (codimension one) expanding attractors, structurally stable diffeomorphisms, hyperbolic automorphisms Citation: Grines V. Z., Zhuzhoma E. V., Expanding attractors, Regular and Chaotic Dynamics, 2006, vol. 11, no. 2, pp. 225-246 DOI:10.1070/RD2006v011n02ABEH000347
Published by editor on March 25, 2017

Classical Branch Structure from Spatial Redundancy in a Many-Body Wave Function
PRL: General Physics: Statistical and Quantum Mechanics, Quantum Information, etc. Author(s): C. Jess Riedel
When the wave function of a large quantum system unitarily evolves away from a low-entropy initial state, there is strong circumstantial evidence it develops "branches": a decomposition into orthogonal components that is indistinguishable from the corresponding incoherent mixture with feasible obser… [Phys. Rev. Lett. 118, 120402] Published Fri Mar 24, 2017

Quantum Potential induced UV-IR coupling in Analogue Hawking radiation: From Bose-Einstein Condensates to canonical acoustic black holes. (arXiv:1703.08027v1 [gr-qc])
Authors: Supratik Sarkar, A. Bhattacharyay
Arising out of a non-local, non-relativistic BEC, we present an analogue gravity model up to $\mathcal{O}(\xi^{2})$ accuracy in the presence of the quantum potential term for a canonical acoustic BH in $(3+1)$-d spacetime, where the series solution of the free minimally coupled KG equation for the large-length-scale massive scalar modes is derived. We systematically address the issues of the presence of the quantum potential term being the root cause of a UV-IR coupling between short-wavelength "primary" modes, which are supposedly Hawking radiated through the sonic event horizon, and the large-wavelength "secondary" modes. In the quantum gravity experiments of analogue Hawking radiation in the laboratory, this UV-IR coupling is inevitable and one cannot get rid of these large-wavelength excitations, which would grow over space by gaining energy from the short-wavelength Hawking-radiated modes. We identify the characteristic feature in the growth rate(s) that would distinguish these primary and secondary modes.

Space QUEST mission proposal: Experimentally testing decoherence due to gravity. (arXiv:1703.08036v1 [quant-ph])
Authors: Siddarth Koduru Joshi, Jacques Pienaar, Timothy C. Ralph, Luigi Cacciapuoti, Will McCutcheon, John Rarity, Dirk Giggenbach, Vadim Makarov, Ivette Fuentes, Thomas Scheidl, Erik Beckert, Mohamed Bourennane, David Edward Bruschi, Adan Cabello, Jose Capmany, José A. Carrasco, Alberto Carrasco-Casado, Eleni Diamanti, Miloslav Dušek, Dominique Elser, Angelo Gulinatti, Robert H. Hadfield, Thomas Jennewein, Rainer Kaltenbaek, Michael A. Krainak, Hoi-Kwong Lo, Christoph Marquardt, Paolo Mataloni, Gerard Milburn, Momtchil Peev, Andreas Poppe, Valerio Pruneri, Renato Renner, Christophe Salomon, Johannes Skaar, Nikolaos Solomos, Mario Stipčević, Juan P. Torres, Morio Toyoshima, Paolo Villoresi, Ian Walmsley, Gregor Weihs, Harald Weinfurter, Anton Zeilinger, Marek Żukowski, Rupert Ursin
Models of quantum systems on curved space-times lack sufficient experimental verification. Some speculative theories suggest that quantum properties, such as entanglement, may exhibit entirely different behavior to purely classical systems. By measuring this effect or lack thereof, we can test the hypotheses behind several such models. For instance, as predicted by Ralph and coworkers [T C Ralph, G J Milburn, and T Downes, Phys. Rev. A, 79(2):22121, 2009; T C Ralph and J Pienaar, New Journal of Physics, 16(8):85008, 2014], a bipartite entangled system could decohere if each particle traversed through a different gravitational field gradient. We propose to study this effect in a ground to space uplink scenario.
We extend the above theoretical predictions of Ralph and coworkers and discuss the scientific consequences of detecting/failing to detect the predicted gravitational decoherence. We present a detailed mission design of the European Space Agency's (ESA) Space QUEST (Space – Quantum Entanglement Space Test) mission, and study the feasibility of the mission schema. The SIC Question: History and State of Play. (arXiv:1703.07901v1 [quant-ph]) Authors: Christopher A. Fuchs, Michael C. Hoang, Blake C. Stacey Recent years have seen significant advances in the study of symmetric informationally complete (SIC) quantum measurements, also known as maximal sets of complex equiangular lines. Previously, the published record contained solutions up to dimension 67, and was with high confidence complete up through dimension 50. Computer calculations have now furnished solutions in all dimensions up to 151, and in several cases beyond that, as large as dimension 323. These new solutions exhibit an additional type of symmetry beyond the basic definition of a SIC, and so verify a conjecture of Zauner in many new cases. The solutions in dimensions 68 through 121 were obtained by Andrew Scott, and his catalogue of distinct solutions is, with high confidence, complete up to dimension 90. Additional results in dimensions 122 through 151 were calculated by the authors using Scott's code. We recap the history of the problem, outline how the numerical searches were done, and pose some conjectures on how the search technique could be improved. In order to facilitate communication across disciplinary boundaries, we also present a comprehensive bibliography of SIC research. Transformation properties and entanglement of relativistic qubits under space-time and gauge transformations. (arXiv:1703.07998v1 [quant-ph]) Authors: Xavier Calmet, Jacob Dunningham We revisit the properties of qubits under Lorentz transformations and, by considering Lorentz invariant quantum states in the Heisenberg formulation, clarify some misleading notation that has appeared in the literature on relativistic quantum information theory. We then use this formulation to consider the transformation properties of qubits and density matrices under space-time and gauge transformations. Finally we use our results to understand the behaviour of entanglement between different partitions of quantum systems. Our approach not only clarifies the notation, but provides a more intuitive and simple way of gaining insight into the behaviour of relativistic qubits. In particular, it allows us to greatly generalize the results in the current literature as well as substantially simplifying the calculations that are needed. Quantum time delay in the gravitational field of a rotating mass. (arXiv:1703.08095v1 [gr-qc]) Authors: Emmanuele Battista, Angelo Tartaglia, Giampiero Esposito, David Lucchesi, Matteo Luca Ruggiero,Pavol Valko, Simone Dell' Agnello, Luciano Di Fiore, Jules Simo, Aniello Grado We examine quantum corrections of time delay arising in the gravitational field of a spinning oblate source. Low-energy quantum effects occurring in Kerr geometry are derived within a framework where general relativity is fully seen as an effective field theory. By employing such a pattern, gravitational radiative modifications of Kerr metric are derived from the energy-momentum tensor of the source, which at lowest order in the fields is modelled as a point mass. 
Therefore, in order to describe a quantum corrected version of time delay in the case in which the source body has a finite extension, we introduce a hybrid scheme where quantum fluctuations affect only the monopole term occurring in the multipole expansion of the Newtonian potential. The predicted quantum deviation from the corresponding classical value turns out to be too small to be detected in the near future, showing that new models should be examined in order to test low-energy quantum gravity within the solar system.

Heads or tails in zero gravity: an example of a classical contextual "measurement". (arXiv:1703.07550v1 [quant-ph])
Authors: Alexandre Gondran (MAIAA), Michel Gondran (AEIS)
Playing the game of heads or tails in zero gravity demonstrates that there exists a contextual "measurement" in classical mechanics. When the coin is flipped, its orientation is a continuous variable. However, the "measurement" that occurs when the coin is caught by clapping two hands together gives a discrete value (heads or tails) that depends on the context (orientation of the hands). It is then shown that there is a strong analogy with the spin measurement of the Stern-Gerlach experiment, and in particular with Stern and Gerlach's sequential measurements. Finally, we clarify the analogy by recalling how the de Broglie-Bohm interpretation simply explains the spin "measurement".

The Overview Chapter in Loop Quantum Gravity: The First 30 Years. (arXiv:1703.07396v1 [gr-qc])
Authors: Abhay Ashtekar, Jorge Pullin
This is the introductory Chapter in the monograph Loop Quantum Gravity: The First 30 Years, edited by the authors, that was just published in the series "100 Years of General Relativity". The 8 invited Chapters that follow provide fresh perspectives on the current status of the field from some of the younger and most active leaders who are currently shaping its development. The purpose of this Chapter is to provide a global overview by bridging the material covered in subsequent Chapters. The goal and scope of the monograph is described in the Preface which can be read by following the Front Matter link at the website listed below.

Evolution of Universes in Causal Set Cosmology. (arXiv:1703.07556v1 [gr-qc])
Authors: Fay Dowker, Stav Zalel
The causal set approach to the problem of quantum gravity is based on the hypothesis that spacetime is fundamentally discrete. Spacetime discreteness opens the door to novel types of dynamical law for cosmology and the Classical Sequential Growth (CSG) models of Rideout and Sorkin form an interesting class of such laws. It has been shown that a renormalisation of the dynamical parameters of a CSG model occurs whenever the universe undergoes a Big Crunch-Big Bang bounce. In this paper we propose a way to model the creation of a new universe after the singularity of a black hole. We show that renormalisation of dynamical parameters occurs in a CSG model after such a creation event. We speculate that this could realise aspects of Smolin's Cosmological Natural Selection proposal.

Primordial Black Holes as Dark Matter. (arXiv:1607.06077v4 [astro-ph.CO] UPDATED)
Authors: Bernard Carr, Florian Kuhnel, Marit Sandstad
The possibility that the dark matter comprises primordial black holes (PBHs) is considered, with particular emphasis on the currently allowed mass windows at $10^{16}$ – $10^{17}\,$g, $10^{20}$ – $10^{24}\,$g and $1$ – $10^{3}\,M_{\odot}$. The Planck mass relics of smaller evaporating PBHs are also considered.
All relevant constraints (lensing, dynamical, large-scale structure and accretion) are reviewed and various effects necessary for a precise calculation of the PBH abundance (non-Gaussianity, non-sphericity, critical collapse and merging) are accounted for. It is difficult to put all the dark matter in PBHs if their mass function is monochromatic but this is still possible if the mass function is extended, as expected in many scenarios. A novel procedure for confronting observational constraints with an extended PBH mass spectrum is therefore introduced. This applies for arbitrary constraints and a wide range of PBH formation models, and allows us to identify which model-independent conclusions can be drawn from constraints over all mass ranges. We focus particularly on PBHs generated by inflation, pointing out which effects in the formation process influence the mapping from the inflationary power spectrum to the PBH mass function. We then apply our scheme to two specific inflationary models in which PBHs provide the dark matter. The possibility that the dark matter is in intermediate-mass PBHs of $1$ – $10^{3}\,M_{\odot}$ is of special interest in view of the recent detection of black-hole mergers by LIGO. The possibility of Planck relics is also intriguing but virtually untestable. Improved noninterferometric test of collapse models using ultracold cantilevers. (arXiv:1611.09776v2 [quant-ph] UPDATED) Authors: A. Vinante, R. Mezzena, P. Falferi, M. Carlesso, A. Bassi Spontaneous collapse models predict that a weak force noise acts on any mechanical system, as a consequence of the collapse of the wave function. Significant upper limits on the collapse rate have been recently inferred from precision mechanical experiments, such as ultracold cantilevers and the space mission LISA Pathfinder. Here, we report new results from an experiment based on a high Q cantilever cooled to millikelvin temperature, potentially able to improve by one order of magnitude the current bounds on the continuous spontaneous localization (CSL) model. High accuracy measurements of the cantilever thermal fluctuations reveal a nonthermal force noise of unknown origin. This excess noise is compatible with the CSL heating predicted by Adler. Several physical mechanisms able to explain the observed noise have been ruled out. de Broglie's double solution program: 90 years later. (arXiv:1703.06158v1 [quant-ph]) Authors: Samuel Colin, Thomas Durt, Ralph Willox Since de Broglie's pilot wave theory was revived by David Bohm in the 1950's, the overwhelming majority of researchers involved in the field have focused on what is nowadays called de Broglie-Bohm dynamics and de Broglie's original double solution program was gradually forgotten. As a result, several of the key concepts in the theory are still rather vague and ill-understood. In the light of the progress achieved over the course of the 90 years that have passed since de Broglie's presentation of his ideas at the Solvay conference of 1927, we reconsider in the present paper the status of the double solution program. More than a somewhat dusty archaeological piece of history of science, we believe it should be considered as a legitimate attempt to reconcile quantum theory with realism. Underground tests of quantum mechanics. Whispers in the cosmic silence?. (arXiv:1703.06796v1 [quant-ph]) Authors: C. Curceanu, S. Bartalucci, A. Bassi, M. Bazzi, S. Bertolucci, C. Berucci, A.M. Bragadireanu, M. Cargnelli, A. Clozza, L. De Paolis, S. Di Matteo, S. Donadi, J-P. Egger, C. 
Guaraldo, M. Iliescu, M. Laubenstein, J. Marton, E. Milotti, A. Pichler, D. Pietreanu, K. Piscicchia, A. Scordo, H. Shi, D. Sirghi, F. Sirghi, L. Sperandio, O. Vazquez Doce, J. Zmeskal
By performing X-ray measurements in the "cosmic silence" of the underground laboratory of Gran Sasso, LNGS-INFN, we test a basic principle of quantum mechanics: the Pauli Exclusion Principle (PEP), for electrons. We present the achieved results of the VIP experiment and the ongoing VIP2 measurement aiming to gain two orders of magnitude improvement in testing PEP. We also use a similar experimental technique to search for radiation (X and gamma) predicted by continuous spontaneous localization models, which aim to solve the "measurement problem".

Towards a Quantum World Wide Web. (arXiv:1703.06642v1 [cs.AI])
Authors: Diederik Aerts, Jonito Aerts Arguelles, Lester Beltran, Lyneth Beltran, Isaac Distrito, Massimiliano Sassoli de Bianchi, Sandro Sozzo, Tomas Veloz
We elaborate a quantum model for corpora of written documents, like the pages forming the World Wide Web. To that end, we are guided by how physicists constructed quantum theory for microscopic entities, which unlike classical objects cannot be fully represented in our spatial theater. We suggest that a similar construction needs to be carried out by linguists and computational scientists, to capture the full meaning content of collections of documental entities. More precisely, we show how to associate a quantum-like 'entity of meaning' to a 'language entity formed by printed documents', considering the latter as the collection of traces that are left by the former, in specific results of search actions that we describe as measurements. In other words, we offer a perspective where a collection of documents, like the Web, is described as the space of manifestation of a more complex entity – the QWeb – which is the object of our modeling, drawing its inspiration from previous studies on operational-realistic approaches to quantum physics and quantum modeling of human cognition and decision-making. We emphasize that a consistent QWeb model needs to account for the observed correlations between words appearing in printed documents, e.g., co-occurrences, as the latter would depend on the 'meaning connections' existing between the concepts that are associated with these words. In that respect, we show that both 'context and interference (quantum) effects' are required to explain the probabilities calculated by counting the relative number of documents containing certain words and co-occurrences of words.
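As an editorial aside on the SIC abstract quoted above (the sketch below is not taken from that paper): a SIC in dimension $d$ is a set of $d^2$ unit vectors satisfying $|\langle\psi_i|\psi_j\rangle|^2 = 1/(d+1)$ for all $i \neq j$. The snippet checks this condition numerically for the standard $d = 2$ example, whose Bloch vectors form a regular tetrahedron.

import itertools
import numpy as np

def bloch_state(r):
    # Pure qubit state as a density matrix: |psi><psi| = (I + r.sigma)/2 for a unit Bloch vector r.
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    return 0.5 * (np.eye(2) + r[0] * sx + r[1] * sy + r[2] * sz)

# Four Bloch vectors forming a regular tetrahedron: the standard d = 2 SIC.
tetra = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)
states = [bloch_state(r) for r in tetra]

d = 2
for i, j in itertools.combinations(range(len(states)), 2):
    overlap = np.trace(states[i] @ states[j]).real  # equals |<psi_i|psi_j>|^2 for pure states
    assert np.isclose(overlap, 1.0 / (d + 1))
print("d^2 = 4 states with pairwise overlap 1/(d+1): a qubit SIC")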
"I Kinda Just Messed with It": Investigating Students' Resources for Learning Digital Composing Technologies Outside of Class.
Keaton, Megan K., Neal, Michael R., McDowell, Stephen D., Yancey, Kathleen Blake, Fleckenstein, Kristie S., Florida State University, College of Arts and Sciences, Department of English
This dissertation investigates the resources that students use to learn new digital technologies to complete course assignments. This work is particularly important in a time when teachers are assigning more multimodal projects. If students are using and learning digital technologies to complete our assignments, we might argue that we should teach our students how to use the specific technologies they would use for the assignment. Yet, teaching students specific technologies is complicated for several reasons, including limited time and resources, numerous and quickly obsolete software, different levels of expertise for students and teachers, and more. Because of these complications, students may benefit from spending less time with instruction in specific technologies and more time considering practices for learning new digital technologies. This dissertation works to discover practices that teachers can use in the classroom to help their students learn how to learn new digital technologies in order to compose multimodal texts. To do this, I investigate how students are already learning technologies outside of the classroom and use this investigation to identify possible pedagogical directions. To gain a broader understanding of the resources students are using, I surveyed five sections of an upper-level composition course in which students completed at least one digital assignment. Then, to gain a more nuanced and richer description of resource use, I interviewed three of these students. To analyze the data, I used a framework adapted from Jeanette R. Hill and Michael J. Hannafin's components for Resource-Based Learning (RBL). RBL is a pedagogical approach that aims to teach students how to learn and to produce students who are self-directed problem-solvers, able to work both collaboratively and individually. Though RBL is a pedagogical approach, I used its values and parameters as a lens for understanding students' use of resources. RBL (as the name suggests) puts emphasis on the resources students use to facilitate their learning. Given the wide variety of resources and the ways in which they can be used in the classroom, few scholars articulate precisely what RBL may look like more generally. Hill and Hannafin (2010), however, list four components among which RBL can vary: resources, tools, contexts, and scaffolds.
In this study, resource is an umbrella term for the tools, contexts, and humans students may use to support their learning. Tools are the non-human objects that students use to learn new technologies. Humans are the people from whom students seek help. Contexts are the rhetorical situations (specifically the audiences and purposes for composing) surrounding the technological learning, the students' past technological experiences, and the physical locations in which students work. An important element of this study is to identify not only what resources students use, but also how they use their resources; scaffolds are how the resources are used. The scaffolds in this study are as follows: conceptual scaffolds – resources help students decide the order in which to complete tasks, understand the affordances and constraints of the technology, and learn the genre conventions of a given text; metacognitive scaffolds – resources help students tap into their prior knowledge; procedural scaffolds – resources provide students with step-by-step instructions for completing tasks or with definitions of vocabulary; and strategic scaffolds – resources encourage students to experiment in order to learn and solve problems they encounter while learning the technology. In addition to addressing what and how students use resources to learn to perform tasks with the technology, I also examined how students used resources to learn the specialized vocabulary of the technology and the technology's affordances and constraints. The study resulted in eight findings about the ways in which students are using resources. These findings were then used to identify three areas for possible strategies teachers might consider to help students use resources to learn new technologies: 1. Helping students effectively choose technologies, which includes assisting them in (a) using resources to identify technology options and learn about the affordances and constraints of the options and (b) using the affordances and constraints, their composing situations, and the available resources to choose the technology that best meets their needs. 2. Helping students effectively use templates, which includes aiding them in (a) using templates to learn about the genres in which they are composing, (b) selecting effective templates, and (c) altering the templates based on their rhetorical situations and preferences. 3. Helping students learn the technology's specialized vocabulary, which includes assisting them in (a) identifying familiar visual and linguistic vocabulary, (b) making educated guesses about unfamiliar vocabulary, and (c) using resources to learn unfamiliar vocabulary. FSU_2017SP_Keaton_fsu_0071E_13707

"Music's Most Powerful Ally": The National Federation of Music Clubs as an Institutional Leader in the Development of American Music Culture, 1898-1919.
Hedrick, Ashley Geer, Bearor, Karen A. (Karen Anne), Broyles, Michael, Eyerly, Sarah, Florida State University, College of Music
This dissertation explores the founding of the National Federation of Music Clubs (NFMC) in 1898 and focuses upon the organization's activities from its beginning to 1920. It highlights how the original members were able to build a strong and influential institution that continues to support American music and musicians today. The creation of the NFMC is a result of two developments that occurred simultaneously during the nineteenth century in the United States: 1) the proliferation of voluntary associations and organized reform movements and 2) the emergence of high art music culture across the nation. This project applies gender theory to examine the development of the notion of the domestic sphere as the appropriate domain for the female sex in the nineteenth century, and how women reacted to dominant ideologies through voluntary organizations that broadened their world. It also utilizes recent scholarship in women's history, social history, early American history, and institutional studies to present a survey of the types of organizations that formed and how they changed in response to the social and historical context. Even though the NFMC was originally a women's institution run by and for women, its larger goal was to disseminate art music culture through local club activities across the nation to all citizens. The growth of women's music clubs was part of the post-Civil War boom of women's culture clubs. The concept of music as art developed and spread steadily during the nineteenth century, and at first the music clubs specifically cultivated art music based on western European traditions, which was associated with high class refinement. European ideals were perpetuated by an influx of European touring virtuosos and groups during the first half of the nineteenth century. In her article titled "Art Music from 1800 to 1860," Katherine K. Preston explains that the polished concerts performed by touring musicians not only circulated art music among Americans, but they also introduced higher performance standards, which resulted in increasingly higher expectations for refined performances from American audiences starting in the 1820s and 1830s and surging after 1840. These performances were supported and promoted by patrons and institutions, which ultimately led to the growth of art music appreciation as a movement throughout the nation. Michael Broyles clarifies that even though European style was dominant during the nineteenth century, American musical culture was uniquely formed by "historical events that have no European counterpart." He states that institutions controlled the character of the music in the United States. The support and dissemination of American art music happened through a combination of civic, philanthropic, private, and entrepreneurial activities, which included: the spread of art music through touring virtuosos and ensembles on a much larger scale than the first half of the nineteenth century, women's music clubs, orchestras, monster concerts and festivals, an increase in the number of American-born composers during the late nineteenth century, and a growing sense of patriotism at the turn of the century. During the late nineteenth and into the early twentieth centuries, women's music clubs became one of the most effective cultivators of classical music in the United States through their strong infrastructure and collaboration with prominent musicians, critics, and pedagogues.
This project highlights the integral role of the NFMC's activities in many of the significant developments in the history of American music at this time. No other institution has been as ubiquitous or influential as the NFMC in the musical growth of the United States. This dissertation is the first detailed exploration of the history of this powerful institution. FSU_2017SP_Hedrick_fsu_0071E_13773

"What's Love Got to Do with It?": The Master-Slave Relationship in Black Women's Neo-Slave Narratives.
Price, Jodi L., Montgomery, Maxine Lavon, Jones, Maxine Deloris, Moore, Dennis D., Ward, Candace, Florida State University, College of Arts and Sciences, Department of English
A growing impulse in American black female fiction is the reclamation of black female sexuality due to slavery's proliferation of sexual stereotypes about black women. Because of slave law's silencing of rape culture, issues of consent, will, and agency become problematized in a larger dilemma surrounding black humanity and the repression of black female sexuality. Since the enslaved female was always assumed to be willing, because she is legally unable to give consent or resist, locating black female desire within the confines of slavery becomes largely impossible. Yet, contemporary re-imaginings of desire in this context become an important point of departure for re-membering contemporary black female subjectivity. "What's Love Got to Do With It?" is an alternative look at master-slave relationships, particularly those between white men and black women, featured in contemporary slave narratives by black women writers. Although black feminist critics have long considered love an unavailable, if not unthinkable, construct within the context of interracial relationships during slavery, this project locates this unexpected emotion within four neo-slave narratives. Finding moments of love and desire from both slaveholders and slaves, this study nuances monolithic historical players we are usually quick to adjudicate. Drawing on black feminist criticism, history, and critical race theory, this study outlines the importance of exhuming these historic relationships from silence, acknowledging the legacies they left for heterosexual love and race relations, and exploring what lessons we can take away from them today. Recognizing the ongoing tension between remembering and forgetting and the inherent value in both, this study bridges the gap by delineating the importance of perspective and the stories we choose to tell. Rather than being forever haunted by traumatic memories of the past and proliferating stories of violence and abuse, Barbara Chase-Riboud, Octavia Butler, Gayle Jones, and Gloria Naylor's novels reveal that there are ways to negotiate the past, use what you need, and come to a more holistic place where love is available. FSU_2017SP_Price_fsu_0071E_13737

The 1-Type of Algebraic K-Theory as a Multifunctor.
Valdes, Yaineli, Aldrovandi, Ettore, Rawling, John Piers, Agashe, Amod S., Aluffi, Paolo, Petersen, Kathleen L., Hoeij, Mark van, Florida State University, College of Arts and Sciences, Department of Mathematics
It is known that the category of Waldhausen categories is a closed symmetric multicategory and algebraic K-theory is a multifunctor from the category of Waldhausen categories to the category of spectra. By assigning to any Waldhausen category the fundamental groupoid of the 1-type of its K-theory spectrum, we get a functor from the category of Waldhausen categories to the category of Picard groupoids, since stable 1-types are classified by Picard groupoids. We prove that this functor is a multifunctor to a corresponding multicategory of Picard groupoids. 2018_Sp_Valdes_fsu_0071E_14374

Active Control of High-Speed Free Jets Using High-Frequency Excitation.
Upadhyay, Puja, Alvi, Farrukh S., Hussaini, M. Yousuff, Kumar, Rajan, Clark, Jonathan E., Gustavsson, Jonas, Florida State University, College of Engineering, Department of Mechanical Engineering
Control of aerodynamic noise generated by high-performance jet engines continues to remain a serious problem for the aviation community. Intense low-frequency noise produced by large-scale coherent structures is known to dominate acoustic radiation in the aft angles. A tremendous amount of research effort has been dedicated towards the investigation of many passive and active flow control strategies to attenuate jet noise, while keeping performance penalties to a minimum. Unsteady excitation, an active control technique, seeks to modify acoustic sources in the jet by leveraging the naturally-occurring flow instabilities in the shear layer. While excitation at a lower range of frequencies that scale with the dynamics of large-scale structures has been attempted by a number of studies, effects at higher excitation frequencies remain severely unexplored. One of the major limitations stems from the lack of appropriate flow control devices that have sufficient dynamic response and/or control authority to be useful in turbulent flows, especially at higher speeds.
To this end, the current study seeks to fulfill two main objectives. First, the design and characterization of two high-frequency fluidic actuators ($25$ and $60$ kHz) are undertaken, where the target frequencies are guided by the dynamics of high-speed free jets. Second, the influence of high-frequency forcing on the aeroacoustics of high-speed jets is explored in some detail by implementing the nominally 25 kHz actuator on a Mach 0.9 ($Re_D = 5\times10^5$) free jet flow field. Subsequently, these findings are directly compared to the results of steady microjet injection experiments performed in the same rig and to prior jet noise control studies, where available. Finally, limited acoustic measurements were also performed by implementing the nominally 25 kHz actuators on jets at higher Mach numbers, including shock containing jets, and elevated temperatures. Using lumped element modeling as an initial guide, the current work expands on the previous development of low-frequency (2-8 kHz) Resonance Enhanced Micro-actuators (REM) to design actuators that are capable of producing high amplitude pulses at much higher frequencies. Extensive benchtop characterization, using acoustic measurements as well as optical diagnostics using a high resolution micro-schlieren setup, is employed to characterize the flow properties and dynamic response of these actuators. The actuators produced high-amplitude output a range of frequencies, $20.3-27.8$ kHz and $54.8-78.2$ kHz, respectively. In addition to providing information on the actuator flow physics and performances at various operating conditions, the benchtop study serves to develop relatively easy-to-integrate, high-frequency actuators for active control of high-speed jets for noise reduction. Following actuator characterization studies, the nominally 25 kHz ($St_{DF} \approx 2.2$) actuators are implemented on a Mach 0.9 free jet flow field. Eight actuators are azimuthally distributed at the nozzle exit to excite the initial shear layer at frequencies that are approximately an order of magnitude higher compared to the \textit{jet preferred frequency}, $St_P \approx 0.2-0.3$. The influence of control on the mean and turbulent characteristics of the jet, especially the developing shear layer, is examined in great detail using planar and stereoscopic Particle Image Velocimetry (PIV). Examination of cross-stream velocity profiles revealed that actuation leads to strong, spatially coherent streamwise vortex pairs which in turn significantly modify the mean flow field, resulting in a prominently undulated shear layer. These vortices grow as they convect downstream, enhancing local entrainment and significantly thickening the initial shear layer. Azimuthal inhomogeneity introduced in the jet shear layer is also evident in the simultaneous redistribution and reduction of peak turbulent fluctuations in the cross-plane near the nozzle exit. Further downstream, control results in a global suppression of turbulence intensities for all axial locations, also evidenced by a longer potential core and overall reduced jet spreading. The resulting impact on the noise signature is estimated via far-field acoustic measurements. Noise reduction was observed at low to moderate frequencies for all observation angles. Direct comparison of these results with that of steady microjet injection revealed some notable differences in the initial development of streamwise vorticity and the redistribution of peak turbulence in the azimuthal direction. 
However, despite significant differences in the near-nozzle aerodynamics, the downstream evolution of the jet appeared to approach near similar conditions with both high-frequency and steady microjet injection. Moreover, the impact on far-field noise was also comparable between the two injection methods as well as with others reported in the literature. Finally, for jets at higher Mach numbers and elevated temperatures, the effect of control was observed to vary with jet conditions. While the impact of the two control mechanisms was fairly comparable on non-shock-containing jets, high-frequency forcing was observed to produce significantly larger reductions in screech and broadband shock-associated noise (BBSN) at select under-expanded jet conditions. The observed variations in control effects at different jet conditions call for further investigation. FSU_FALL2017_Upadhyay_fsu_0071E_14154

Active Control of Salient Flow Features in the Wake of a Ground Vehicle.
McNally, Jonathan William, Alvi, Farrukh S., Jung, Sungmoon, Kumar, Rajan, Taira, Kunihiko, Hahn, Seung Yong, Florida State University, College of Engineering, Department of Mechanical Engineering
Aerodynamics of road vehicles has continued to be a topic of interest due to the relationship between fuel efficiency and the environmental impact of passenger vehicles. With the streamlining of ground vehicles combined with years of geometric and shape optimization, other techniques are required to continue to improve upon fuel consumption. One such technique leverages aerodynamics to minimize drag through the implementation of flow control techniques. The current study focuses on the application of active flow control in ground vehicle applications, employing linear arrays of discrete microjets on the rear of a 25° Ahmed model. The locations of the arrays are selected to test the effectiveness of microjet control at directly manipulating the various features found in typical flow fields generated by ground vehicles. Parametric sweeps are conducted to investigate the flow response as a function of jet velocity, momentum, and vehicle scaling. The effect and efficiency of the control are quantified through aerodynamic force measurements, while local modifications are investigated via particle image velocimetry and static pressure measurements on the rear surfaces of the model. Microjets proved most effective when utilized for separation control, producing a maximum change to the coefficients of drag and lift of -14.0% and -42% of the baseline values, respectively. Control techniques targeting other flow structures such as the C-pillar vortices and trailing wake proved less effective, producing a maximum reduction in drag and lift of -1.2% and -7%.
The change in the surface pressure distribution reveals the impact of each flow control strategy on a targeted flow structure, and highlights the complex interaction between the salient flow features found in the wake of the Ahmed model. Areas of pressure recovery on the surface of the model observed for each control technique support the observed changes to the aerodynamic forces. The time-averaged, volumetric wake is also reconstructed to characterize the baseline flow field and highlight the effect of control on the three-dimensional structure of the near-wake region. The results show that separation control has a measurable effect on the flow field, including modifications of the locations, size, magnitude, and trajectory of the various structures which comprise the near wake. The observations give insight into desirable modifications and flow topology which lead to an optimal drag configuration for a particular vehicle geometry. 2018_Su_McNally_fsu_0071E_14507

Active Control of Wingtip Vortices Using Piezoelectric Actuated Winglets.
Guha, Tufan Kumar, Kumar, Rajan (Professor of Mechanical Engineering), Liang, Zhiyong Richard, Oates, William, Alvi, Farrukh S., Florida State University, FAMU-FSU College of Engineering, Department of Mechanical Engineering
Wingtip vortices develop at the tips of aircraft wings due to a pressure imbalance during the process of generating lift. These vortices significantly increase the total aerodynamic drag of an aircraft at high-lift flight conditions such as during take-off and landing. The long trailing vortices contain strong circulation and may induce rolling moments and lift losses on a trailing aircraft, making them a major cause of wake turbulence. A mandatory spacing between aircraft is administered by civil aviation agencies to reduce the probability of hazardous wake encounters. These measures, while necessary, restrict the capacity of major airports and lead to higher wait times between take-off and landing of two aircraft. This poses a major challenge in the face of continuously increasing air traffic volume. Wingtip vortices are also known as a potent source of aerodynamic vibrations and noise. These negative effects have made the study of wingtip vortex attenuation a critical area of research. The problem of induced drag has been addressed with the development of wingtip devices, like winglets. Tip devices diffuse the vortex at its very onset, leading to lower induced drag. The problem of wake turbulence has been addressed in studies on vortex interactions and co-operative instabilities. These instabilities accelerate the process of vortex breakdown, leading to a lower lifetime in the wake. A few studies have tried to develop active mechanisms that can artificially excite these instabilities.
The aim of the present study is to develop a device that can be used for both reducing induced drag and exciting wake instabilities. To accomplish this objective, an active winglet actuator has been developed with the help of piezoelectric Macro-Fiber Composite (MFC). The winglet is capable of oscillating about the main wing-section at a desired frequency and amplitude. A passive winglet is a well-established drag reducing device. An oscillating winglet can introduce perturbations that can potentially lead to instabilities and accelerate the process of vortex breakdown. A half-body model of a generic aircraft configuration was fabricated to characterize and evaluate the performance of actuated winglets. Two winglet models having mean dihedral orientations of 0° and 75° were studied. The freestream velocity for these experiments was 20 m/s. The angle of incidence of the wing-section was varied between 0° and 8°. The Reynolds number based on the mid-chord length of the wing-section is 140,000. The first part of the study consisted of a detailed structural characterization of the winglets at various input excitation and pressure loading conditions. The second part consisted of low speed wind tunnel tests to investigate the effects of actuation on the development of wingtip vortices at different angles of incidence. Measurements included static surface pressure distributions and Stereoscopic (ensemble and phase-locked) Particle Image Velocimetry (SPIV) at various downstream planes. Modal analysis of the fluctuations existing in the baseline vortex and those introduced by actuation is conducted with the help of the Proper Orthogonal Decomposition (POD) technique. The winglet oscillations show bi-modal behavior for both structural and actuation modes of resonance. The oscillatory amplitude at these actuation modes increases linearly with the magnitude of excitation. During wind tunnel tests, fluid-structure interactions lead to structural vibrations of the wing. The effect of these vibrations on the winglet oscillations decreases with the increase in the strength of actuation. At high input excitation, the actuated winglet is capable of generating controlled oscillations suitable for perturbing the vortex. The vortex associated with a winglet is stretched along its axis with multiple vorticity peaks. The center of the vortex core is seen at the root of the winglet while the highest vorticity levels are observed at the tip. The vortex core rotates and becomes more circular in shape while diffusing downstream. The shape, position, and strength of the vorticity peaks are found to vary periodically with winglet oscillation. Actuation is even capable of disintegrating the single vortex core into two vortices. The most energetic POD fluctuation modes, at the center of the baseline vortex core, correspond to vortex wandering at the initial downstream planes. At the farthest planes, the most energetic modes can be associated with core deformation. High energy fluctuations in the actuated vortex correspond to spatial oscillations and distortions produced by the winglet motion. FSU_SUMMER2017_Guha_fsu_0071E_14000

Active Flow Control and Global Stability Analysis of Separated Flow over a NACA 0012 Airfoil.
Munday, Phillip M. (Phillip Michael), Taira, Kunihiko, Hussaini, M. Yousuff, Alvi, Farrukh S., Cattafesta, Louis N., Lin, Shangchao, Florida State University, College of Engineering, Department of Mechanical Engineering
The objective of this computational study is to examine and quantify the influence of fundamental flow control inputs in suppressing flow separation over a canonical airfoil. Most flow control studies to this date have relied on the development of actuator technology, and described the control input based on specific actuators. Taking advantage of a computational framework, we generalize the inputs to fundamental perturbations without restricting inputs to a particular actuator. Utilizing this viewpoint, generalized control inputs aim to aid in the quantification and support the design of separation control techniques. This study in particular independently introduces wall-normal momentum and angular momentum to the separated flow using swirling jets through model boundary conditions. The response of the flow field and the surface vorticity fluxes to various combinations of actuation inputs are examined in detail. By closely studying different variables, the influence of the wall-normal and angular momentum injections on separated flow is identified. As an example, open-loop control of fully separated, incompressible flow over a NACA 0012 airfoil at $\alpha = 6°$ and $9°$ with $Re = 23{,}000$ is examined with large-eddy simulations. For the shallow angle of attack $\alpha = 6°$, the small recirculation region is primarily affected by wall-normal momentum injection. For a larger separation region at $\alpha = 9°$, it is observed that the addition of angular momentum input to wall-normal momentum injection enhances the suppression of flow separation. Reducing the size of the separated flow region significantly impacts the forces, and in particular reduces drag and increases lift on the airfoil. It was found that the influence of flow control on the small recirculation region ($\alpha = 6°$) can be sufficiently quantified with the traditional coefficient of momentum. At $\alpha = 9°$, the effects of wall-normal and angular momentum inputs are captured by modifying the standard definition of the coefficient of momentum, which successfully characterizes suppression of separation and lift enhancement. The effect of angular momentum is incorporated into the modified coefficient of momentum by introducing a characteristic swirling jet velocity based on the non-dimensional swirl number. With the modified coefficient of momentum, this single value is able to categorize controlled flows into separated, transitional, and attached flows. With inadequate control input (separated flow regime), lift decreased compared to the baseline flow. Increasing the modified coefficient of momentum, flow transitions from separated to attached and accordingly results in improved aerodynamic forces. Modifying the spanwise spacing, it is shown that the minimum modified coefficient of momentum input required to begin transitioning the flow is dependent on actuator spacing.
The growth (or decay) of perturbations can facilitate or inhibit the influence of flow control inputs. Biglobal stability analysis is considered to further analyze the behavior of control inputs on separated flow over a symmetric airfoil. Assuming a spanwise periodic waveform for the perturbations, the eigenvalues and eigenvectors about a base flow are solved to understand the influence of spanwise variation on the development of the flow. Two algorithms are developed and validated to solve for the eigenvalues of the flow: an algebraic eigenvalue solver (matrix-based) and a time-stepping algorithm. The matrix-based approach is formulated without ever storing the matrices, creating a computationally memory-efficient algorithm. Based on the matrix-based solver, eigenvalues and eigenvectors are identified for flow over a NACA 0015 airfoil at Re = 200, 600, and 1,000. All three cases contain similar modes, although the growth rate of the leading eigenvalue is decreased with increasing Reynolds number. Three distinct types of modes are found: wake modes, steady modes, and modes of the continuous branch. While this method is limited in the range of Reynolds numbers, these results are used to validate the time-stepper approach. Increasing the Reynolds number to Re = 23,000 over a NACA 0012 airfoil, the time-stepper method is implemented due to the rising computational cost of the matrix-based method. Stability analysis about the time-averaged flow is performed for spanwise wavenumbers of β = 1, 10π, and 20π, of which the latter two are representative of the spanwise spacing between the actuators. The largest spanwise wavelength (β = 1) contained unstable modes that ranged from low to high frequency, including a particular unstable low-frequency mode corresponding to a frequency observed in the lift forces of the baseline large-eddy simulation. For the larger spanwise wavenumbers, β = 10π (L_z/c = 0.2) and 20π (L_z/c = 0.1), low-frequency modes were damped and only modes with f > 5 were unstable. These results help us gain further insight into the influence of the flow control inputs. Flow control is not implemented in a manner to directly excite specific modes, but does dictate the spanwise wavelengths that can be generated. Comparing the unstable eigenmodes at these two spacings, the larger spanwise spacing (β = 10π) had a greater growth rate for the majority of the unstable modes. The smaller spanwise spacing (β = 20π) has only a single unstable mode, with a growth rate an order of magnitude smaller than at β = 10π. With the aid of the increased growth rate, perturbations to the flow with a wider spacing become more effective by interacting with natural modes of the flow. Taking advantage of these natural modes allows for decreased input for the wider spanwise spacing. In conclusion, it was shown that the influence of wall-normal and angular momentum inputs on fully separated flow can adequately be described by the modified coefficient of momentum. Through further analysis and the development of a biglobal stability solver, spanwise spacing effects observed in the flow control study can be explained. The findings from this study should aid in the development of more intelligently designed flow control strategies and provide guidance in the selection of flow control actuators.
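The matrix-free and time-stepper eigenvalue strategies summarized above can be illustrated with a short sketch. The Python snippet below is not the dissertation's solver; it assumes NumPy/SciPy and uses a one-dimensional periodic advection-diffusion operator as a stand-in for the linearized Navier-Stokes operator about the base flow, showing only the structural idea of handing a matrix-vector product to an Arnoldi solver without ever assembling or storing the matrix.

    # Minimal matrix-free global-mode sketch (illustrative stand-in operator).
    import numpy as np
    from scipy.sparse.linalg import LinearOperator, eigs

    n = 400                       # number of grid points
    dx = 1.0 / n                  # periodic domain of unit length
    U, nu = 1.0, 1.0e-3           # stand-in base-flow speed and viscosity

    def apply_linearized_operator(v):
        # Return A @ v for -U d/dx + nu d^2/dx^2 on a periodic grid,
        # computed on the fly; no matrix is ever stored.
        vp = np.roll(v, -1)       # v_{i+1}
        vm = np.roll(v, 1)        # v_{i-1}
        advection = -U * (vp - vm) / (2.0 * dx)
        diffusion = nu * (vp - 2.0 * v + vm) / dx**2
        return advection + diffusion

    A = LinearOperator((n, n), matvec=apply_linearized_operator, dtype=float)

    # Leading eigenvalues (largest real part) approximate the least-damped
    # global modes; the eigenvectors give their spatial structure.
    vals, vecs = eigs(A, k=6, which='LR')
    print(np.sort_complex(vals)[::-1])

In a time-stepper formulation one would instead wrap the action of the time-evolution operator, advancing a small perturbation over a short interval with the existing flow solver, and recover growth rates and frequencies from the logarithms of its eigenvalues; this mirrors the dissertation's move to the time-stepper method at the higher Reynolds number, where working with the full operator becomes too costly.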
FSU_2017SP_Munday_fsu_0071E_13086 Adapt and Prevail: New Applications of Rhythmic and Metric Analysis in Contemporary Metal Music. Garza, Jose Manuel, Clendinning, Jane Piper, Parks, John Will, Kraus, Joseph Charles, Richards, Mark C., Florida State University, College of Music Much recent scholarship on metal music has treated the repertoire through historical and ethnomusicological lenses. While the theoretical literature has engaged certain significant artists—particularly Meshuggah and Dream Theater—several important bands and aspects of the music have been overlooked. Especially significant is the relative lack of writing focusing on rhythmic and metric elements of this music, which shape a large part of these genres' distinctive sounds. My goals for this dissertation are twofold. First, this document serves as a style study with regard to rhythm, meter, hypermeter, and phrase rhythm as heard in contemporary metal music (i.e., metal music from around the mid-1990s to the present). Second, I demonstrate how the ways in which contemporary metal artists manipulate rhythm, meter, hypermeter, and phrase rhythm introduce new concepts or extensions of existing modes of analysis. I problematize two metric devices, namely asymmetric meter and metric modulation, as they apply to popular music, demonstrating applications of my conceptual framework to the contemporary metal repertoire. In my analyses, I adapt existing methodologies by Lerdahl and Jackendoff (1983), Temperley (2001), Pearsall (1997), Rothstein (1989), and Krebs (1999), specifically Lerdahl and Jackendoff's and Temperley's preference-rule systems, Pearsall's durational set notation, Rothstein's descriptions of manipulations of hypermeter and phrase rhythm, and Krebs's two types of metrical dissonance. Although the sources on which I base aspects of this study deal largely with a different repertoire than mine, the similarities between common-practice Western art music and popular music (including contemporary metal) warrant a similar approach. Where the repertoires diverge—particularly with regard to harmonic syntax and the nuances of metrical dissonance—I suggest alternative methods that address the idiosyncrasies of contemporary metal music. Despite the limited body of works I use for this dissertation, I maintain that the analytical models I propose here are applicable to a wider range of popular music. Therefore, with this document, I contribute to the broader cause of rhythm and meter studies in popular music scholarship. FSU_FALL2017_Garza_fsu_0071E_14184 Advice and Discontent: Staging Identity through Legal Representation on the British Stage, 1660-1800. Cerniglia, Sarah Morrow, Burke, Helen M., Upchurch, Charles, Daileader, Celia R., Ward, Candace, Florida State University, College of Arts and Sciences, Department of English
One of the key issues that arises when discussing the long eighteenth century is that of identity: self/individual, and group/national. Whereas recent critical work in both literary studies and historiography has concerned itself with the circumstances surrounding the long eighteenth century's fundamental shifts in conceptions of identity, much of this work overlooks the potential for identity to be relational, rather than either exterior or interior to an individual/group. This dissertation explores the relational nature of identity formation in the long eighteenth century by examining a literary genre and a character that depend upon relational interactions in order to sustain themselves: stage comedies and lawyers. Representative dramatic comedies by writers such as George Farquhar, Richard Cumberland, Thomas Lewis O'Beirne, William Wycherly, Christopher Bullock, Henry Fielding, John O'Keeffe, Colley Cibber, George Colman and David Garrick, and Samuel Foote, offer opportunities to study staged representations of lawyers whose clients' issues essentially become those of identity formation. This dissertation argues that, for many characters struggling to establish an identity that can participate in a national British identity, the key to such participation lies in access to real property; when access to real property is denied them, they must turn to someone who is himself struggling to establish an identity. At this point, lawyers in eighteenth-century British comedies become much more than stock characters or mere comic relief. Instead, the lawyer—often ostracized and derided himself—becomes a mediator not just of individual identity, but of "Britishness." Careful attention to lawyers' success representing different types of clients struggling to establish identities through access to real property highlights both the power of relational identity formation and the key roles that arguably minor characters have in arbitrating issues of national significance. FSU_2017SP_Cerniglia_fsu_0071E_13700 Aeroacoustic Characteristics of Supersonic Impinging Jets. Worden, Theodore James, Alvi, Farrukh S., Shih, Chiang, Liang, Zhiyong Richard, Collins, Emmanuel G., Gustavsson, Jonas, Kumar, Rajan (Professor of Mechanical Engineering), Michalis, Krista, Florida State University, College of Engineering, Department of Mechanical Engineering
High-speed impinging jets are often generated by the propulsive systems of aerospace launch vehicles and tactical aircraft. In many instances, the presence of these impinging jets creates a hazard for flight operations personnel due to the extremely high noise levels and unsteady loads produced by fluid-surface interaction. In order to effectively combat these issues, a fundamental understanding of the flow physics and dominant acoustic behavior is essential. There are inherent challenges in performing such investigations, especially with the need to simulate the flowfield under realistic operational conditions (temperature, Mach number, etc.) and in configurations that are relevant to full-scale application. A state-of-the-art high-temperature flow facility at Florida State University has provided a unique opportunity to experimentally investigate the high-speed impinging jet flowfield at application-relevant conditions. Accordingly, this manuscript reports the findings of several experimental studies on high-temperature supersonic impinging jets in multiple configurations. The overall objective of these studies is to characterize the complex relationship between the hydrodynamic and acoustic fields. A fundamental parametric investigation has been performed to document the flowfield and acoustic characteristics of an ideally-expanded supersonic air jet impinging onto a semi-infinite flat plate at ambient and heated jet conditions. The experimental program has been designed to span a widely-applicable geometric parameter space, and as such, an extensive database of the flow and acoustic fields has been developed for impingement distances in the range 1d to 12d, impingement angles in the range 45 degrees to 90 degrees, and jet stagnation temperatures from 289 K to 811 K (TTR = 1.0 to 2.8). Measurements include point-wise mean and unsteady pressure on the impingement surface, time-resolved shadowgraphy of the flowfield, and fully three-dimensional near-field acoustics. Aside from detailed documentation of the flow and acoustic fields, this work aims to develop a physical understanding of the noise sources generated by impingement. Correlation techniques are employed to localize and quantify the spatial extent of broadband noise sources in the near-impingement region and to characterize their frequency content. Additionally, discrete impingement tones are documented for normal and oblique incidence angles, and an empirical model of the tone frequencies has been developed using velocity data extracted from time-resolved shadowgraphy together with a simple modification to the conventional feedback formula to account for non-normal incidence. Two application-based studies have also been undertaken. In simulating a vertical take-off and landing aircraft in hover, the first study of a normally-impinging jet outfitted with a lift plate characterizes the flow-acoustic interaction between the high-temperature jet and the underside of an aircraft and documents the effectiveness of an active flow control technique known as 'steady microjet injection' to mitigate high noise levels and unsteady phenomena. The second study is a detailed investigation of the jet blast deflector/carrier deck configuration aimed at gaining a better understanding of the noise field generated by a jet operating on a flight deck. The acoustic directionality and spectral characteristics are documented for a model-scale carrier deck with particular focus on locations that are pertinent to flight operations personnel.
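For context on the "conventional feedback formula" mentioned above, the standard normal-impingement form (of the Powell/Ho-Nosseir type) is sketched below in LaTeX; the oblique-incidence modification developed in the dissertation is not reproduced here, so this should be read only as the baseline relation being modified.

    % N-th stage impingement tone frequency f_N for nozzle-to-plate distance L:
    % u_c is the average convection speed of shear-layer structures travelling
    % downstream, a_inf the ambient sound speed for the upstream-travelling
    % acoustic wave, and p an empirical phase lag.
    \frac{N + p}{f_N} = \frac{L}{u_c} + \frac{L}{a_\infty}

The relation states that one feedback period equals the time for a disturbance to convect from the nozzle to the plate plus the time for the resulting acoustic wave to travel back upstream, which is why convection-velocity estimates extracted from the time-resolved shadowgraphy feed directly into the tone-frequency model.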
FSU_SUMMER2017_Worden_fsu_0071E_13997 An Aeroacoustic Characterization of a Multi-Element High-Lift Airfoil. Pascioni, Kyle A., Cattafesta, Louis N., Sussman, Mark, Alvi, Farrukh S., Xu, Cheryl, Choudhari, Meelan, Florida State University, College of Engineering, Department of Mechanical Engineering The leading edge slat of a high-lift system is known to be a large contributor to the overall radiated acoustic field from an aircraft during the approach phase of the flight path. This is due to the unsteady flow field generated in the slat-cove and near the leading edge of the main element. In an effort to understand the characteristics of the flow-induced source mechanisms, a suite of experimental measurements has been performed on a two-dimensional multi-element airfoil, namely, the MD-30P30N. Particle image velocimetry provides mean flow field and turbulence statistics to illustrate the differences associated with a change in angle of attack. Phase-averaged quantities prove shear layer instabilities to be linked to narrowband peaks found in the acoustic spectrum. Unsteady surface pressures are also acquired, displaying strong narrowband peaks and large spanwise coherence at low angles of attack, whereas the spectrum becomes predominantly broadband at high angles. Nonlinear frequency interaction is found to occur at low angles of attack, while being negligible at high angles. To localize and quantify the noise sources, phased microphone array measurements are performed on the two-dimensional high-lift configuration. A Kevlar wall test section is utilized to allow the mean aerodynamic flow field to approach distributions similar to a free-air configuration, while still capable of measuring the far-field acoustic signature. However, the inclusion of elastic porous sidewalls alters both aerodynamic and acoustic characteristics. Such effects are considered and accounted for. Integrated spectra from Delay and Sum and DAMAS beamforming effectively suppress background facility noise and additional noise generated at the tunnel wall/airfoil junction. Finally, temporally-resolved estimates of a low-dimensional representation of the velocity vector fields are obtained through the use of proper orthogonal decomposition and spectral linear stochastic estimation. An estimate of the pressure field is then extracted by Poisson's equation. From this, Curle's analogy projects the time-resolved pressure forces on the airfoil surface to further establish the connection between the dominating unsteady flow structures and the propagated noise. FSU_2017SP_Pascioni_fsu_0071E_13776 Affective Labor Power in Sport Management: A Political Economic Analysis of Internships in the Sports Industry.
Hawzen, Matthew G., Newman, Joshua I., Giardina, Michael D., Xue, Hanhan, Florida State University, College of Education, Department of Sport Management Internships are an integral part of the job-training regimen for college students in the United States today. The prevalence of internships in higher education and the U.S. economy is often justified by the compelling idea that internships provide mutual value to universities, students, and employers (Becker, 1962; Coco, 2000). The internship system, however, has become the subject of litigation in court, politicized as a regime of wage theft, and critiqued for its contribution to the... Show moreInternships are an integral part of the job-training regimen for college students in the United States today. The prevalence of internships in higher education and the U.S. economy is often justified by the compelling idea that internships provide mutual value to universities, students, and employers (Becker, 1962; Coco, 2000). The internship system, however, has become the subject of litigation in court, politicized as a regime of wage theft, and critiqued for its contribution to the widening gulf between rich and poor in the United States (Perlin, 2011b). It is within this context that internships have become a core component of the academic field of sport management. Sport management has used internships as a preparatory practice since its inception in the late 1960s. Founded on the idea of training a managerial class of workers for the sports industry, sport management has grown from one program in 1966 to over 400 today. Sport management scholars argue that such growth comes from 1) the burgeoning sports industry's demand for a trained workforce and 2) from the more and more students who want to enroll in the degree programs (Chelladurai, 2017; Masteralexis, Barr, & Hums, 2011). Despite the effective demand amongst students and the labor demand from the industry, scholars are describing the labor market as over-saturated and highly competitive (DeLuca & Braunstein-Minkove, 2016). The major consequences are an uncertain job market and suboptimal labor conditions for interns and graduates. This dissertation examines the political economy of internships within and between sport management and the sports industry and explores in this context the labor power, or productive subjectivities, of sport management majors going through the internship process. I performed in-depth semi-structured to unstructured interviews with 33 sport management majors who were at three different points in the internship process (before, during, and after). The interviews were conducted to understand the production of motivations and capacities to work in sport (or the demand for sport management); the experience of being entangled in labor market competition; the expectations for, and experiences of, interning; and the formative, and ongoing, role that sport (fandom and athletic participation) plays in the lives and labor of interns jockeying for positions in the sports industry. In my analysis, I discuss the ways in which my respondents became subjects of social reproduction between sport and capitalism and subjected to affective conditions of exploitation. I provide a critique of dominant internship orthodoxy, the function of internships in the sports industry, and the active role sport management plays in reproducing conditions of exploitation. 
And I illustrate how the contradictions of internships under capital give rise to passion, love, hope, and optimism as irrational yet core characteristics of the sport management workforce. After having fleshed out myriad issues with internships, I conclude with a discussion about what we can do about internships in sport management to improve the labor conditions for future interns. 2018_Su_Hawzen_fsu_0071E_14636 Affine Dimension of Smooth Curves and Surfaces. Williams, Ethan Randy, Oberlin, Richard, Ormsbee, Michael J., Reznikov, Alexander, Bauer, Martin, Florida State University, College of Arts and Sciences, Department of Mathematics Our aim is to study the affine dimension of some smooth manifolds. In Chapter 1, we review the notions of Minkowski and Hausdorff dimension, and compare them with the lesser-studied affine dimension. In Chapter 2, we focus on understanding the affine dimension of curves. In Section 2.1, we review the existing results for the affine dimension of a strictly convex curve in the plane, and in Section 2.2, we classify the smooth curves in ℝⁿ based on affine dimension. In Chapter 3, we classify the smooth hypersurfaces in ℝ³ with non-negative Gaussian curvature based on affine dimension, and in Chapter 4 we provide lower and upper bounds for the affine dimension of smooth, convex hypersurfaces in ℝⁿ. 2018_Sp_Williams_fsu_0071E_14512 After Essentialism: Possibilities in Phenomenology of Religion. Lupo, Joshua S., Kavka, Martin, Williamson, George S., Kalbian, Aline H., Kelsay, John, Florida State University, College of Arts and Sciences, Department of Religion Scholars of religion and the humanities more often than not claim to engage in critical inquiry. Too often, however, these claims are not adequately justified. To resolve this problem, this dissertation turns to the philosophical movement known as phenomenology. Inaugurated by Edmund Husserl and developed by Martin Heidegger, this philosophical movement, at its best, has focused on how our consciousness of the world is structured by our intentional relation with it. At its worst, this tradition of philosophy has supported essentialism, that is, the belief that we can bracket our social, political, and historical contexts and in doing so attain unchanging knowledge of our world. The phenomenological method has a complex history within the study of religion. Phenomenologists of religion believed that they could discern a common essence behind different religious traditions.
The phenomenological approach is no longer popular among scholars in the study of religion. Russell McCutcheon, for example, claims that the phenomenological approach has allowed scholars to implicitly protect religious traditions, and indeed the very category of religion, from criticism. For McCutcheon, when scholars essentialize religion, they place it outside the social and political realm, and make it immune from critique. On McCutcheon's account, however, it is not simply phenomenology of religion, but the phenomenological method itself, that is to blame for this lack of critical rigor. To examine the plausibility of this claim, the first three chapters of this dissertation examine the work of three of the most widely cited phenomenologists of religion—Rudolf Otto, Gerardus van der Leeuw, and Mircea Eliade—and show how their work does and does not share in the same philosophical assumptions as Husserl and Heidegger. I contend that much of their work does suffer from the problem of essentialism that McCutcheon identifies. I also contend that some of the blame for this should be pinned on Husserl, for whom essential knowledge remained an important aspiration. This, however, does not mean that all phenomenology should be abandoned. In the fourth chapter, I argue that existential phenomenology not only allows for critical analysis, but also offers a more plausible grounding for critique. McCutcheon's method seeks to fix our knowledge of the world by arguing that our claims about it, including our claims about religion, are constituted by power relations. But if this were the case, scholarship itself would simply be an expression of power, and for that reason its critiques could never be evaluated using criteria established by reason. Through an examination of Heidegger's early lectures and Being and Time, I provide a justification for a critical approach to examining religious traditions. What makes Heidegger's account useful, I contend, is his analysis of the formation of a subject who can take up and critique the norms that govern his or her life, not by placing him- or herself outside of his or her tradition, but by taking up a place within it. This grounding makes possible a non-essentialist approach to critique. It takes the content of our lives to be made up of the historically, socially, and politically contingent norms that govern us. But it also offers an account of how we can take up and critique those norms. In the final chapter of the dissertation, I cash out this approach's usefulness by turning to recent debates surrounding natural law. As opposed to some approaches to natural law reasoning which claim that there are essential moral and ethical goods that make up the natural law and transcend our contexts, Jean Porter and Vincent Lloyd argue for a tradition-based approach to natural law that takes the content of the natural law to be dependent upon the social and historical contexts in which proponents of natural law locate themselves. I argue that John Finnis and Germain Grisez, along with two critics of Finnis and Grisez, Lisa Cahill and Cristina Traina, desire to fix the content of the natural law in an essentialist manner, and that Porter and Lloyd offer a more compelling account of natural law reasoning that is amenable to Heidegger's existential phenomenology. 
This chapter thus shows how the previously proposed phenomenological account of selfhood can be used to critique a religious tradition without fixing that tradition as either a manifestation of a sacred reality or of power. The dissertation ends with a reflection on the role of irony in the study of religion, arguing that irony should be used by scholars to challenge the status quo, but should not be used cynically to suggest that there is no way to move beyond it. 2018_Sp_Lupo_fsu_0071E_14397 Against Reason a Defense of Moderate Normative Skepticism. Vadakin, Aron, Mele, Alfred R., Kavka, Martin, Rawling, Piers, Clarke, Randolph K., Florida State University, College of Arts and Sciences, Department of Philosophy This dissertation both surveys contemporary work in metanormativity and argues for a position that I call moderate normative skepticism. I begin by evaluating efforts to characterize the normative domain and conclude that while some normative concepts and properties are amenable to naturalistic programs of reduction and analysis, other normative concepts and properties are not. I proceed to clarify accounts of reasons, reasoning, and rationality; this establishes argumentative room to... Show moreThis dissertation both surveys contemporary work in metanormativity and argues for a position that I call moderate normative skepticism. I begin by evaluating efforts to characterize the normative domain and conclude that while some normative concepts and properties are amenable to naturalistic programs of reduction and analysis, other normative concepts and properties are not. I proceed to clarify accounts of reasons, reasoning, and rationality; this establishes argumentative room to maneuver for my moderate normative skepticism. Next, I evaluate moral error theories, which I count as close cousins of my own thesis, and I note how these error theories have more profound implications than their authors realize. I claim that, understood properly, these error theories extend to the domain of normative reasons in general. I accept and defend the extension of error theory as a viable position. In the final chapter of my dissertation, I defend my position against charges of self-defeat and attempt to anticipate and defuse potential criticisms. FSU_FALL2017_Vadakin_fsu_0071E_14258 An Agential Exploration of Tragedy and Irony in Post-1945 Orchestral Works. Lee, Richard C., Kraus, Joseph Charles, Gontarski, S. E., Buchler, Michael Howard, Jones, Evan Allan, Florida State University, College of Music This analytic dissertation explores tragic and ironic narratives in post-1945 orchestral works through the lens of musical agency and critical theory. For the purposes of this study, I define musical narrative as any sequencing of musical events mediated by agency in which a musical story emerges that underscores the sequence of musical events. My definition and overall methodology for musical narrative follows the work of Lawrence Kramer, Michael Klein, and Byron Almén as I explore tragedy... Show moreThis analytic dissertation explores tragic and ironic narratives in post-1945 orchestral works through the lens of musical agency and critical theory. For the purposes of this study, I define musical narrative as any sequencing of musical events mediated by agency in which a musical story emerges that underscores the sequence of musical events. 
My definition and overall methodology for musical narrative follows the work of Lawrence Kramer, Michael Klein, and Byron Almén as I explore tragedy and irony in three large-scale compositions composed after World War II. From Kramer, I break down the narrative process into three components: (1) narrativity, what generates a narrative account; (2) narratography, the discoursing of a narrative; and (3) narrative, the musical story that accompanies the discourse. From Klein (and others), I borrow concepts from intertextuality, critical theory, literary theory, and philosophy to inform these musical stories. Finally, from Almén, I take the tragic narrative archetype as an organizational analytical tool in my analyses. The primary narrativity in each analysis is a form of musical agency, and each analysis questions the role of agency in the interpretation of tragic musical stories in these post-1945 works. For the purposes of this study, musical agency is defined as a perceived entity's ability to interact with its environment, often emerging as personification of musical events. Scholars differ on how agencies interact: whether they have volition and intention, whether they arise as a singular subjectivity, what kind of space they inhabit, etc. While some depict musical agency as a messy endeavor, others aim to provide a structure for its interpretation. As an organizing principle, I use Seth Monahan's four agential classes to focus my discussion. Each chapter generally addresses one of the following: (1) the individuated element (notes, harmonies, themes), (2) the work-persona (the personification of the whole piece), (3) the fictional composer (the being postulated by the analyst as the controlling author of a work), and (4) the analyst. Hence, after an introduction, each of the succeeding chapters of this dissertation focuses on one agential class as a guiding narrativity. I bring narrative and agency together by borrowing ideas from literary theory, psychoanalysis, and philosophy. In chapter two I explore the agential class of the work-persona in Krzysztof Penderecki's Third Symphony, in which the work-persona laments and dies twice, yielding a scenario described by Slavoj Žižek as the "two deaths." In that reading one death is real and the other is symbolic, and the ordering of the deaths in Penderecki's symphony leads the analyst to read the scenario tragically. The two deaths generate the interpretation of death as a master signifier, borrowing from the work of Jacques Lacan. The master signifier then guides the analytical decisions that are made. In my analysis of Thomas Adès's Asyla in chapter three, fictional composer agencies are locked in a power struggle, leading to a reading that evokes Michel Foucault's conceptualizations of power and panopticism. Investigating the individuated element agency in chapter four, I posit that George Rochberg's conservative employment of serial technique in his Symphony No. 2 leads to a reading of belatedness that supplements its commentary on the tragedy of the Second World War. My final chapter serves as an analysis of the analyst, the highest ranking of Monahan's agential classes. Here I describe the three analyses of the preceding chapters as three component parts that contribute to an analyst's machine, following the philosophy of Gilles Deleuze. The analyst's machine serves as a departure point for an exploration of analytical subjectivity. I begin that inquiry by positioning the analyst in a virtual space (as opposed to an actual space). 
I next parallel Lacan's formulation of the identity with the analytical process, tracing how analysts build their identity through a combination of received components, resulting in a fractured subjectivity. Finally, I bring back the idea of fiction (from the fictional composer agency) to establish a fictional analyst who is a Deleuzian assemblage of refrains and avatars that carries out an analysis. The goal of this dissertation is to uncover narrative approaches to post-1945 music by combining familiar analytical tools with interdisciplinary methodologies. Focusing on a certain agential class as a narrativity, the individual analyses of tragic works by Penderecki, Adès, and Rochberg lead to a reconsideration of the analyst in the concluding chapter—and that chapter serves as a starting point for future analytical and theoretical endeavors. FSU_2017SP_Lee_fsu_0071E_13774 Aging in Activity Spaces: Understanding the Automobility of Aging Populations. Wood, Brittany S. (Brittany Suzanne), Horner, Matthew I. (Matthew Ian), Brown, Jeffrey R., Uejio, Christopher K., Folch, David C., Florida State University, College of Social... Show moreWood, Brittany S. (Brittany Suzanne), Horner, Matthew I. (Matthew Ian), Brown, Jeffrey R., Uejio, Christopher K., Folch, David C., Florida State University, College of Social Sciences and Public Policy, Department of Geography The proportion of individuals aged 65 and over is growing at an astronomical rate in the United States, and some estimate that this demographic age group will double by the year 2025. Aging adults are primarily dependent on the personal automobile as their main source of transportation. Older adults and adults nearing retirement age also tend to reside in suburban neighborhoods and rely heavily on personal vehicles. Since most of the United States is characterized by automobile dependent... Show moreThe proportion of individuals aged 65 and over is growing at an astronomical rate in the United States, and some estimate that this demographic age group will double by the year 2025. Aging adults are primarily dependent on the personal automobile as their main source of transportation. Older adults and adults nearing retirement age also tend to reside in suburban neighborhoods and rely heavily on personal vehicles. Since most of the United States is characterized by automobile dependent suburbanization, where the majority of development is suburban low-density sprawl, this may become problematic for aging populations who may be uncomfortable driving longer distances and making more trips. These trends invite the question of whether the deck is stacked against individuals approaching retirement age (50-64) and aging populations (65 and up). This study examines aging populations' mobility and determines whether they have different travel patterns than their younger cohorts. Additionally, this investigation explores whether or not travel patterns across age groups result in differential access to particular goods and services, as well as differences in travel environment characteristics in a metropolitan area. This research proposes an approach based on Time Geographic Density Estimation (TGDE) to identify activity spaces across different age cohorts in order to identify differences in the mobility and travel behavior of aging adults. TGDE is an established technique in the literature, which blends the notion of activity spaces with the computation of probabilistic potential path trees along a transportation system. 
In this way it establishes an 'extent' or overall mapping of the activity space of an individual, but is able to further refine that extent to identify the most likely places they are able to visit within that geography. Data on origin and destination trips and travel times are taken from the National Household Travel Survey (NHTS) Florida add-on for the study area of Orlando Metropolitan Statistical Area (MSA). Transportation is an important consideration in planning for aging populations, and analyzing differences in how older adults travel compared to their younger counterparts can offer insight into the diverse needs of this group. FSU_SUMMER2017_Wood_fsu_0071E_13926 The Aging Inmate Crisis: Institutional Adjustment and Post-Prison Outcome Differences between Older and Younger Prisoners. Scaggs, Samuel J. A., Bales, William D., Radey, Melissa, Mears, Daniel P., Blomberg, Thomas G., Florida State University, College of Criminology and Criminal Justice, College of... Show moreScaggs, Samuel J. A., Bales, William D., Radey, Melissa, Mears, Daniel P., Blomberg, Thomas G., Florida State University, College of Criminology and Criminal Justice, College of Criminology and Criminal Justice In the past two decades, the older prisoner population in the U.S. has experienced unprecedented growth (Carson & Sabol, 2016; Scaggs & Bales, 2016). In fact, older prisoners represent the fastest growing inmate population (Carson & Sabol, 2016). This growth has become an important policy concern for government officials and correctional administrators because these prisoners are substantially more expensive to incarcerate and less likely to reoffend compared to younger prisoners (Chettiar et... Show moreIn the past two decades, the older prisoner population in the U.S. has experienced unprecedented growth (Carson & Sabol, 2016; Scaggs & Bales, 2016). In fact, older prisoners represent the fastest growing inmate population (Carson & Sabol, 2016). This growth has become an important policy concern for government officials and correctional administrators because these prisoners are substantially more expensive to incarcerate and less likely to reoffend compared to younger prisoners (Chettiar et al., 2012). Older prisoners are more fiscally demanding to correctional systems due to healthcare and special housing considerations (Chettiar et al., 2012; Nowotny et al., 2015; Lemieux, Dyeson, & Castiglione, 2002; Linder & Meyers, 2007; Reimer, 2008). Older prisoners also represent a diverse population comprised of different criminal history profiles. While many prisoners are first time servers in old age, others are chronic offenders who have been in and out of prison multiple times (Beckett, Peternelj-Taylor, & Johnson, 2003). However, there is a void in prior literature regarding differences in the in-prison adaptation and post-prison reentry experiences among these inmates based on being a first time server and alternative definitions of what constitutes being an 'older prisoner. The current study seeks to fill two gaps in the prior literature on older prisoners. First, it will assess how older inmates differ from younger inmates in terms of in-prison adjustment and post-prison outcomes. Previous research studies find that older prisoners are less likely to engage in most types of prison misconduct (Blowers & Blevins, 2015) and to reoffend after prison release relative to their younger counterparts (Durose et al., 2014). 
What is less documented in prior studies is whether the employment prospects for older ex-convicts differ from those among younger prisoners and the extent to which finding work may, in turn, affect recidivism. Second, this study highlights the heterogeneity that exists among older versus younger inmates in their prison adaptation and reentry outcomes based on age and criminal history. A large percentage of older prisoners have never been previously incarcerated in prison. The Florida Department of Corrections 2013-2014 Annual Report shows that 46.2 percent of prisoners age 50 or older were committed to prison for the first time (Florida Department of Corrections, 2014). Prior research suggests that first time older prisoners may have an especially adverse response, and ultimately adjustment, to their commitment to prison which is manifested through institutional rule violations in the presence of family conflict, suicidal thoughts, depression, and fear of death (Aday, 1994; Leigey, 2015). This study uses data from a release cohort of former prisoners in Florida from 2004 to 2011 to examine differences between younger versus older prisoners. The data include institutional measures, pre-prison employment and criminal histories, and post-prison employment and recidivism information to examine differences in prison adjustment and post-release outcomes among different age groups and being a first time server among older versus younger inmates. By examining the effects of alternative age definitions on three primary outcomes—(1) prison misconduct, (2) post-prison employment, and (3) recidivism—this study contributes to prior literatures on gerontology, prison management, age stratification of post-prison employment opportunities, and recidivism. This study's focus on using old age as a key variable for explaining in-prison and reentry process outcomes is pertinent to a broader study of gerontology because it addresses important issues faced by a special subset of older adults within society. This study also contributes to the current literature on crime over the life course by assessing if and when older inmates are likely to find short-term employment and recidivate. FSU_2017SP_Scaggs_fsu_0071E_13795 Ain't No Sunshine: The Political Economy of Florida's Fight for Solar. Huebner, Alex, Kazmer, Michelle M., Opel, Andy, Graves, Brian, Florida State University, College of Communication and Information, School of Communication This dissertation interrogates how Florida's major electric utility companies actively suppressed the nascent solar energy industry in their effort to consolidate solar energy production into their hands during the 2015-2016 election cycle. Along with this, newspaper coverage of this issue was analyzed to determine how the fight was presented to the public and whether prevailing commercial pressures that influence the news production process affected the coverage of this issue. Finally,... Show moreThis dissertation interrogates how Florida's major electric utility companies actively suppressed the nascent solar energy industry in their effort to consolidate solar energy production into their hands during the 2015-2016 election cycle. Along with this, newspaper coverage of this issue was analyzed to determine how the fight was presented to the public and whether prevailing commercial pressures that influence the news production process affected the coverage of this issue. 
Finally, audience commentary about this issue was explored to determine how Facebook users made sense of this issue and whether the commentary reflected the prominent themes that were also present in the news coverage. Results highlight the economic and political ties between the utility companies and their support network as well as the solar supporters and their affiliated network that squared off in this fight. Additionally, findings reveal that commercial pressures on the news production process resulted in news coverage that portrayed this issue to the public from a small handful of viewpoints, limiting the range of perspectives from which this issue may legitimately be discussed. Furthermore, results indicate that Facebook users who commented on this issue largely reflected the same perspectives and concerns that were present in the news coverage. Final conclusions and recommendations for changes to the news production process are provided. 2018_Sp_Huebner_fsu_0071E_14429 Algorithmic Lung Nodule Analysis in Chest Tomography Images: Lung Nodule Malignancy Likelihood Prediction and a Statistical Extension of the Level Set Image Segmentation Method. Hancock, Matthew C. (Matthew Charles), Magnan, Jeronimo Francisco, Duke, D. W., Hurdal, Monica K., Mio, Washington, Florida State University, College of Arts and Sciences, Department of Mathematics Lung cancer has the highest mortality rate of all cancers in both men and women in the United States. The algorithmic detection, characterization, and diagnosis of abnormalities found in chest CT scan images can aid radiologists by providing additional medically-relevant information to consider in their assessment of medical images. Such algorithms, if robustly validated in clinical settings, carry the potential to improve the health of the general population. In this thesis, we first give an analysis of publicly available chest CT scan annotation data, in which we determine upper bounds on expected classification accuracy when certain radiological features are used as inputs to statistical learning algorithms for the purpose of inferring the likelihood of a lung nodule being either malignant or benign. Second, a statistical extension of the level set method for image segmentation is introduced and applied to both synthetically-generated and real three-dimensional image volumes of lung nodules in chest CT scans, obtaining results comparable to the current state-of-the-art on the latter. 2018_Sp_Hancock_fsu_0071E_14427 Algorithms for Solving Linear Differential Equations with Rational Function Coefficients.
Imamoglu, Erdal, van Hoeij, Mark, van Engelen, Robert, Agashe, Amod S. (Amod Sadanand), Aldrovandi, Ettore, Aluffi, Paolo, Florida State University, College of Arts and Sciences, Department of Mathematics This thesis introduces two new algorithms to find hypergeometric solutions of second order regular singular differential operators with rational function or polynomial coefficients. Algorithm 3.2.1 searches for solutions of the type exp(∫ r dx) ⋅ ₂F₁(a₁,a₂;b₁;f), and Algorithm 5.2.1 searches for solutions of the type exp(∫ r dx) (r₀ ⋅ ₂F₁(a₁,a₂;b₁;f) + r₁ ⋅ ₂F₁′(a₁,a₂;b₁;f)), where f, r, r₀, r₁ ∈ ℚ̄(x), a₁, a₂, b₁ ∈ ℚ, and ₂F₁ denotes the Gauss hypergeometric function. The algorithms use modular reduction, Hensel lifting, rational function reconstruction, and rational number reconstruction to do so. Numerous examples from different branches of science (mostly from combinatorics and physics) showed that the algorithms presented in this thesis are very effective. Presently, Algorithm 5.2.1 is the most general algorithm in the literature to find hypergeometric solutions of such operators. This thesis also introduces a fast algorithm (Algorithm 4.2.3) to find integral bases for arbitrary order regular singular differential operators with rational function or polynomial coefficients. A normalized (Algorithm 4.3.1) integral basis for a differential operator provides transformations that convert the differential operator to its standard forms (Algorithm 5.1.1), which are easier to solve. FSU_SUMMER2017_Imamoglu_fsu_0071E_13942 Altered Nucleosome Positions at Transcription Start Sites in Maize Haplotypes and Mutants of Putative Chromatin Remodelers. Stroud, Linda Kozma, McGinnis, Karen M., Hurt, Myra M., Bass, Hank W., Chadwick, Brian P., Dennis, Jonathan Hancock, Florida State University, College of Arts and Sciences, Department of Biological Science Chromatin remodelers alter DNA-histone interactions in eukaryotic organisms, and have been well characterized in yeast and Arabidopsis. While there are maize proteins with similar domains as known remodelers, the ability of the maize proteins to alter nucleosome position has not been reported.
Mutant alleles of genes encoding several maize proteins (RMR1, CHR101, CHR106, CHR127, CHR156, CHB102, and CHR120) with similar functional domains to known chromatin remodelers were identified. Altered expression of Chr101, Chr106, Chr127, Chr156, Chb102, and Chr120 was demonstrated in plants homozygous for the mutant alleles. These mutant genotypes were subjected to nucleosome position analysis to determine if misregulation of putative maize chromatin proteins would lead to altered DNA-histone interactions. Nucleosome position changes were observed in plants homozygous for chr101, chr106, chr127, chr156, chb102, and chr120 mutant alleles, suggesting that CHR101, CHR106, CHR127, CHR156, CHB102, and CHR120 may affect chromatin structure. The role of RNA polymerases in altering DNA-histone interactions was also tested. Changes in nucleosome position were demonstrated in homozygous mop2-1 individuals. These changes were demonstrated at the b1 tandem repeats and at newly identified loci. While the α-amanitin-inhibited RNA polymerase II demonstrated reduced expression of an RNA polymerase II transcribed gene, no changes in nucleosome position were detected in the α-amanitin-treated plants. Additionally, differential DNA-histone interactions and altered expression of putative chromatin remodelers in different maize haplotypes suggest a role for differentially expressed chromatin proteins in haplotype-specific variation. FSU_SUMMER2017_Stroud_fsu_0071E_13987 Ambitious Instruction in Undergraduate Biology Laboratories. Strimaitis, Anna Margaret, Southerland, Sherry A., Underwood, Nora, Andrews-Larson, Christine J., Winn, Alice A., Florida State University, College of Education, School of... Show moreStrimaitis, Anna Margaret, Southerland, Sherry A., Underwood, Nora, Andrews-Larson, Christine J., Winn, Alice A., Florida State University, College of Education, School of Teacher Education National recommendations for undergraduate biology education call for orchestrating opportunities for students to "figure out" scientific explanations in the classroom setting by engaging in similar disciplinary practices and discourses as scientists. One approach to realize this vision, ambitious science teaching, describes four essential practices, each of which emphasizes classroom talk as an essential feature of student understanding. However, a critical element of reform is the... Show moreNational recommendations for undergraduate biology education call for orchestrating opportunities for students to "figure out" scientific explanations in the classroom setting by engaging in similar disciplinary practices and discourses as scientists. One approach to realize this vision, ambitious science teaching, describes four essential practices, each of which emphasizes classroom talk as an essential feature of student understanding. However, a critical element of reform is the instructor, who translates and enacts recommended practices in the classroom. 
This dissertation examines three specific aspects of ambitious science teaching in the context of an undergraduate biology laboratory course: how teaching assistants (TAs) take up the ambitious science teaching practice of eliciting and responding to student ideas, how TAs use positioning acts to support or constrain students' opportunities to engage in rigorous scientific discourse, and how engaging students in ambitious science teaching practices is mutually supportive for both the TAs develop as a professional scientist and the students' development of proficiency in science. The first study described how thirteen undergraduate biology TAs enacted one ambitious practice, eliciting and responding to students' initial and unfolding ideas, in a general biology laboratory course for nonscience majors before and after one semester of targeted professional development. Each participant was videotaped teaching the same lesson at the beginning of his or her first and second semesters as a TA. These videos were transcribed and coded for ambitious and conservative discursive moves. The findings describe four common profiles for how TAs changed in their practice of eliciting and responding to student ideas after one semester, with one profile eliciting more rigorous student discourse, one profile eliciting less rigorous student discourse, and two profiles fall in the middle of the spectrum. Implications for TA professional development are discussed. The next study was based on the premise that classrooms are complex systems, with a variety of factors influencing the teaching and learning that takes place within the system, including how teachers enact instructional practices. Teachers may translate and enact the same instructional practice differently, which could have important consequences for student learning opportunities. This study examined TA views about the role of the TA and the role of the students in classroom conversations and how these views supported or constrained opportunities for students to engage in scientific discourse. Using qualitative case study methodology, I examined how five TAs enacted whole class conversations in four different lab investigations over two different semesters. Using positioning acts as an analytical lens, the data were analyzed to develop themes describing how the role of the TA and the students was signaled in these five classrooms. The findings illustrated how TAs who positioned students as critical contributors to scientific conversations created opportunities for students to engage in scientific discourse while TAs who self-positioned as the authority on biology knowledge limited opportunities for students to engage in scientific discourse. Implications for classroom practice are discussed. The final study is based on the premise that, due to the calls for reforming undergraduate biology education, biology TAs are increasingly responsible for enacting student-centered instruction. However, TAs must balance coursework, research and teaching responsibilities, and teaching responsibilities are seldom considered opportunities to develop biology expertise needed as a professional scientist. However, some evidence suggests that using ambitious science teaching practices that engage students in the practices and discourses of science actually supports the TA in developing scientific expertise. 
This research investigated this link by examining how TAs organize biological knowledge before and after teaching a general biology lab curriculum that supported ambitious pedagogy. It also examined the relationship between knowledge organization and instructional practices. To capture changes in TAs' knowledge organization, the TAs completed a card-sorting task at the start and end of the semester. To capture instructional practices, TAs were videotaped teaching the same lab at the beginning of two consecutive semesters. The conversations in these teaching episodes were transcribed and TA talk was coded for ambitious discourse moves. TA knowledge organization was significantly more sophisticated after one semester of teaching experience. The sophistication of TAs' knowledge organization was also positively related to their use of ambitious discourse moves to elicit and respond to student contributions. This relationship suggests a mutually supportive connection between ambitious teaching practice and disciplinary expertise. Implications for TA professional development are discussed. FSU_2017SP_Strimaitis_fsu_0071E_13831
Amelioration of Anxiety Sensitivity Cognitive Concerns: Exposure to Dissociative Symptoms. Norr, Aaron Martin, Schmidt, Norman B., Winegardner, Mark, Li, Wen (Professor of Psychology), Cougle, Jesse R. (Jesse Ray), McNulty, James, Florida State University, College of Arts and Sciences, Department of Psychology Anxiety sensitivity (AS) has become one of the most well-researched risk factors for the development of psychopathology. Research has found that the AS subfactor of cognitive concerns may play an important role in PTSD, depression, and suicide. AS reduction protocols commonly use interoceptive exposure (IE), or exposure to bodily sensations, to reduce AS. However, current IE paradigms (e.g., CO2 inhalation, straw breathing, hyperventilation) primarily induce physical anxiety symptoms (e.g., racing heart, dizziness), and thus might not be optimal for the reduction of AS cognitive concerns. Previous work has shown that fear reactivity during the induction of dissociative symptoms is uniquely associated with AS cognitive concerns, and therefore it is possible that repeated exposure to dissociative symptoms will result in habituation and decreased AS cognitive concerns. The current study investigated whether repeated exposure to the induction of dissociative symptoms would reduce AS cognitive concerns, and thus be viable as an IE component of treatments directly targeting AS cognitive concerns. Participants (N = 50) who scored at or above 1 SD above the mean on the ASI-3 cognitive subscale were randomly assigned to repeated exposure to dissociative symptoms through audio-visual stimulation or to a control condition (repeatedly listening to classical music).
Results revealed that the classical music control condition resulted in significant decreases in AS cognitive concerns as compared to the active dissociation exposure treatment. Unfortunately, these results do not support the viability of this exposure paradigm in its current format as a treatment for elevated AS cognitive concerns. Future directions include increasing the potency of the symptoms induced, increasing the number of exposures, and providing a stronger conceptual framework for the participants prior to undergoing the exposures. FSU_SUMMER2017_Norr_fsu_0071E_13096
Amy Beach's Cabildo: An American Opera. Powlison, Nicole M. (Nicole Marie), Seaton, Douglass, Fisher, Douglas L., Eyerly, Sarah, Pelkey, Stanley C., Florida State University, College of Music In June 1932 Amy Beach (1867–1944) arrived at her studio at the MacDowell Colony in Peterborough, New Hampshire, to begin working in one of the few major musical genres that she had yet to attempt in her career: an opera, called Cabildo. The libretto for the chamber opera, given to her by Atlanta author, playwright, and fellow Colonist Nan Bagby Stephens (1883–1946), was based on Stephens's own play of the same title. Cabildo's creators had to negotiate the multifaceted artistic expression of American identity through the work's music and plot. The English libretto, rich with local color in sections of dialect, was blended with the sounds of Creole folk tunes and Beach's own art songs to spin a romantic tale of dashing pirates and ghostly lovers, set during the height of the Battle of New Orleans, the final conflict of the War of 1812. Well received at its modest premiere at the University of Georgia in 1945, Beach's only opera remains unpublished and rarely studied or performed. A primary component of this project is the critical edition of Amy Beach's only opera, Cabildo, op. 149, completed in 1932. This edition is prepared to professional publication standards and provides a resource for both scholarly study and performance. The original draft and performance manuscripts of the score from archives at the University of New Hampshire and the University of Missouri-Kansas City provide the source material for the edition. In addition to the critical edition, the dissertation provides a "thick description" of Cabildo that locates the opera in its historical context. Relying on primary source materials such as Beach's diaries and newspaper or magazine articles, this description places the work in the composer's life as a new type of project that came at the very end of her career and an opportunity to collaborate with another woman. The description also identifies musical works with contemporary themes or methods to draw comparisons between Cabildo and its contemporaries, placing it in the context of the varied styles of American English-language opera in the 1930s. Cabildo represents one way in which a particular American opera expresses national identity through music and plot.
Composers such as Beach attempted to negotiate the aesthetic conundrum that demanded that American art music be of the highest cosmopolitan standards while still having something distinctly "American" about it. This was complicated by the desire of many composers to incorporate American musics that lay outside the European musical heritage, including the simultaneously local and exotic musical materials from African American and Native American cultures. Cabildo exemplifies this complex negotiation of American identity in opera: it is an American opera with a libretto in English, treating a historical topic important to the history of the United States, yet incorporating a mix of Creole folk songs and dialect with music in Beach's own style, all set in the exotic location of New Orleans. By uniting a significant event from national history with a distinctive regional music set in a familiar Romantic style, Beach and Stephens created an opera that is at once intrinsically American and still appealing to a diverse and cosmopolitan audience. FSU_2017SP_Powlison_fsu_0071E_13860
An Analysis of FEMA Curricular Outcomes in an Emergency Management and Homeland Security Certificate Program—a Case Study Exploring Instructional Practice. Samples, Malaika Catherine, Schrader, Linda B., Brower, Ralph S., Gawlik, Marytza, Akiba, Motoko, Florida State University, College of Education, Department of Educational Leadership and Policy Studies In the United States, the higher education community is charged with the academic education of emergency management professionals. The present rate of natural disasters as well as the evolving threat of terrorist attacks have created a demand for practitioners who are solidly educated in emergency management knowledge, skills, and abilities. These conditions have in turn precipitated the aggressive growth of emergency management and homeland security academic programs in higher education, characterized as the most relevant development in the field of emergency management (Darlington, 2008). With the goal of accelerating professionalization of emergency management occupations through higher education, the Federal Emergency Management Agency's (FEMA) Higher Education Program's research efforts focused on developing a set of evidence-based competencies for academic programs. These were outlined in FEMA's Curricular Outcomes (2011). This study explored how these evidence-based competencies are manifested in emergency management and homeland security academic programs and contributes to filling the gap in the literature on the implementation of FEMA's professional competencies in academic programs, a consequence of legal constraints prohibiting the direct collection of implementation data by federal agencies.
The results indicated that a wide range of competencies was represented in program coursework, with gaps in alignment identified in the five competency areas. The analysis also revealed the exclusion of homeland security topics in Curricular Outcomes (2011), which led to issues of operationalization. Lastly, instructors shared feedback to improve alignment, and the researcher discusses key conditions for similar use of a responsive evaluation framework in academic programs. 2018_Sp_Samples_fsu_0071E_14432
Analysis of High Switching Frequency Quasi-Z-Source Photovoltaic Inverter Using Wide Bandgap Devices. Kayiranga, Thierry, Li, Hui, Ordóñez, Juan Carlos, Edrington, Christopher S., Pamidi, Sastry V., Lipo, Thomas A., Florida State University, FAMU-FSU College of Engineering, Department of Electrical and Computer Engineering FSU_SUMMER2017_Kayiranga_fsu_0071E_13964
Analysis of Non-Thermal Plasma Discharge Contacting Liquid Water Using Plasma Diagnostics and Computer Simulations. Wang, Huihui, Locke, Bruce R., Alabugin, Igor V., Chella, Ravindran, Alamo, Rufina G., Florida State University, College of Engineering, Department of Chemical and Biomedical Engineering Non-thermal plasma technology, which can be used as an advanced oxidation process (AOP) for water treatment, has gained significant attention recently. A plasma discharge contacting liquid water generates strong oxidants, such as ·OH and H2O2 and, in the presence of O2, ozone (O3), which are capable of degrading or completely mineralizing many organic pollutants in waste water. The UV irradiation generated during the plasma discharge can enhance the degradation of organic compounds and kill bacteria. Compared with other water treatment methods, non-thermal plasma technology removes the pollutants completely and rapidly, and it does not introduce any new hazardous materials into the system. However, the high energy cost of non-thermal plasma technology prevents it from being commercialized. Therefore, many studies have been conducted to improve the energy efficiency of the non-thermal plasma technology. This dissertation focused on investigating the influence of operating conditions and the plasma properties on the production of H2O2 during the plasma discharge with liquid water. H2O2 is one of the most important products and indirectly indicates the concentration of ·OH generated by the plasma system. This work focused on the mechanism of H2O2 formation and analyzed the influence of plasma properties, including the electron density and gas temperature, on H2O2 production.
The influence of operating conditions such as discharge power and carrier gases on plasma properties was also investigated. The results provide a general view of H2O2 formation in the plasma-liquid system and provide guidelines for modifying the plasma system to achieve higher efficiency. Another problem in using non-thermal plasma to treat industrial waste water is that the high conductivity of the waste water causes energy wastage, since current flows through the liquid and generates heat. In addition, some plasma systems with low liquid conductivity tolerance cannot generate a discharge when liquid conductivity is high. Therefore, another goal of this work is to study the influence of liquid conductivity on plasma discharge with water and improve the liquid conductivity tolerance of the plasma system. 2018_Fall_Wang_fsu_0071E_14831
Analyzing Bone, Muscle and Adipose Tissue Biomarkers to Identify Osteosarcopenic Obesity Syndrome in Older Women. Jafarinasabian, Pegah, Ilich-Ernst, Jasminka Z., Contreras, Robert J. (Robert John), McGee, Daniel, Spicer, Maria T., Florida State University, College of Human Sciences, Department of Nutrition, Food and Exercise Sciences Osteosarcopenic obesity (OSO) is a recently identified geriatric syndrome characterized by the simultaneous presence of osteopenia/osteoporosis, sarcopenia, and increased adiposity, either as overt overweight or as fat infiltration into bone and muscle. The diagnostic criteria for OSO are just being established, but there are no data on biomarkers that might characterize this syndrome. Our objective was to examine possible biomarkers associated with OSO syndrome, including serum sclerostin (as an inhibitor of bone formation), skeletal muscle-specific troponin T (sTnT) (as an indicator of muscle turnover/damage), and serum leptin and adiponectin (as measures of fat metabolism). Additionally, we analyzed C-reactive protein (CRP) and serum lipids to evaluate the level of inflammation and the lipid profile, respectively. A total of n=59 healthy Caucasian women ≥65 years were classified into 4 groups based on their bone and body composition profile identified by Dual-energy X-ray Absorptiometry (DXA) measurements: 1) Osteopenic obese (N=35); 2) Obese-only (N=10); 3) OSO (N=10); 4) Osteopenic/sarcopenic non-obese (N=4). Osteopenia/osteoporosis was determined by a T-score of L1-L4 and/or femoral neck ≤ -1. Appendicular lean mass was adjusted for both height and fat mass, and an appendicular lean mass residual value of ≤ -1.43 was used as the cut-off point for diagnosing sarcopenia. Obesity included women with body fat percentage ≥32%. Serum samples were analyzed using ELISA kits for sclerostin, sTnT, leptin and adiponectin. CRP and lipid profile were analyzed at Tallahassee Memorial Hospital using the latex amino assay and lipoprotein assay, respectively.
In addition, diet and habitual physical activity were evaluated by a 3-day dietary record and the Allied Dunbar National Fitness Survey, respectively. Data were evaluated by Pearson's correlations and ANOVA followed by Tukey's tests with p≤0.05. Serum sclerostin was significantly higher in the OSO and osteopenic obese groups in comparison to the obese-only group. The sTnT concentrations were significantly higher in the OSO group in comparison to the osteopenic obese and obese-only groups. Sclerostin was negatively correlated with bone mineral density/content (BMD/BMC) of all skeletal sites, and the relationship was statistically significant with femoral neck BMD/BMC. Moreover, there was a significant positive correlation between serum sclerostin and sTnT, indicating their simultaneous mediation in bone and muscle loss. The highest concentrations of serum leptin were observed in the OSO group. Women in the OSO group had significantly greater leptin concentrations than the osteopenic/sarcopenic non-obese group. Serum leptin was significantly negatively correlated with left femoral neck BMD and T-score, total BMC, and left femur BMC after adjusting for weight or BMI. A statistically significant negative correlation of serum adiponectin with body fat percentage was noted, as well as with the BMD and T-scores of several skeletal sites, including total femur and femoral neck. The osteopenic/sarcopenic non-obese group had the highest level of adiponectin (µg/mL) in comparison to the other groups. These results confirm the negative relationship between adiposity and serum adiponectin. The significant negative correlation between serum leptin and BMD in groups with increased body fat may indicate its mediating role between bone and body adiposity. The CRP concentrations for all participants ranged from 0.01 to 1.43 mg/dL. None of the CRP concentrations were above the high threshold (3.0 mg/dL). Although the highest concentrations of CRP were observed in the OSO group, there was no significant difference between groups. The highest concentrations of cholesterol and low density lipoprotein (LDL) were observed in the OSO group. Moreover, the highest concentrations of triglyceride were observed in the osteopenic obese group, and the lowest in the osteopenic/sarcopenic non-obese group. Although the lowest energy intake was observed in the OSO group, there was no significant difference among groups. Moreover, there was no significant difference in vitamin D and calcium intake among the groups. The lowest protein intake was observed in the OSO group; however, there was no significant difference among groups. There was a significant positive correlation between total calcium intake and lean/fat ratio. Moreover, there was a significant negative correlation between protein intake and waist/hip ratio. In conclusion, women identified with OSO syndrome presented with the poorest outcomes for each variable. The combination of high concentrations of sclerostin, sTnT, and leptin and low adiponectin can be used to better identify the metabolic profile of OSO syndrome and could possibly be applied as measurements for its diagnostic criteria. FSU_2017SP_JafariNasabian_fsu_0071E_13697
Anchoring Power through Identity in Online Communication: The Trayvon Martin and Daniela Pelaez Cases.
Mauney, Heather T., Rohlinger, Deana A., Schmertmann, Carl P., Sanyal, Paromita, Ueno, Koji, Florida State University, College of Social Sciences and Public Policy, Department of Sociology This dissertation research uses the analysis of internet-based comments on two major news stories to study the role of identity in anchoring power during discursive participation. For this purpose, identity includes the categorical group memberships that people may place themselves or others into, such as gender, race, or occupation. Identity, as an anchor, is used as a resource for the purpose of linking one's wishes to power, with power being the amount of preferential treatment given to any particular identity in determining the course of events or the proper direction of discussion. The Daniela Pelaez case and the Trayvon Martin case were each selected for making national headlines at approximately the same time, both occurring in the same state, and both being in reference to life-altering circumstances for minority teenagers, yet representing different outcomes. A content analysis of news comment board posts for the Daniela Pelaez and Trayvon Martin cases has been performed to ascertain the use of identity in comments and the prevalence of particular identities, the use of identity to anchor power, the acknowledgement of identities by readers, and the conditions under which identities were used. One article for each case was selected from the same national news source, with an analysis completed for the first 1,000 comments on each article. Identity used as an anchor to power is found to exist, but only has a significant interaction with presentation of an argument for the Martin case. This indicates that the association between anchoring identity and presenting an argument can vary by news story. Identity as an anchor itself varies with race, and is dependent on how race relates to the news story. It is also found that anchoring is more dependent on authors' expectations of what others will consider important than it is effective in shaping readers' actual recordable reactions. FSU_FALL2017_Mauney_fsu_0071E_13216
The Anglo-Saxon Chronicle in the Seventeenth Century: Transmission, Translation, Reception. Day, Patrick V., Johnson, David F. (David Frame), Brewer, Charles E. (Charles Everett), Coldiron, A. E. B. (Anne Elizabeth Banks), Boehrer, Bruce Thomas, Florida State University, College of Arts and Sciences, Department of English
The sixteenth and seventeenth centuries saw the rise of an intense interest in Anglo-Saxon history and artifacts that accompanied the transcription, translation, and dissemination of the contents of England's monastic libraries following the Reformation begun in the 1530s. The tide of religious reform turned to more secular, legal concerns under the two early Stuart kings, and the pre-Norman past was used to simultaneously legitimize and criticize early-seventeenth-century monarchy and its ancient privileges by free monarchists and constitutionalists, respectively. Much of the modern criticism surrounding the constitutional crises of the reigns of James VI and I and Charles I as it relates to the Anglo-Saxon past focuses on Bede and the Benedictine Reformers of the tenth century. The present study, however, considers an often-cited text typically relegated to the periphery: The Anglo-Saxon Chronicle. The Chronicle makes its debut in print under the direction of Abraham Wheelock and the Cambridge University Press in 1643. The annalistic history appears alongside Bede's Historia Ecclesiastica and, in the 1644 reprint and augmentation, the laws from Ine to Alfred and the later Anglo-Norman kings. Wheelock's editio princeps of the Anglo-Saxon Chronicle appears at the height of the First English Civil War in 1643, and it is often treated by modern critics as an appendix to the Old English Historia to which it is attached. This dissertation argues that the Chronicle is not peripheral, and that it participates in a larger royalist campaign to establish the West Saxons as the institutional forebears of the first two Stuart kings. The opening chapters establish Wheelock and his literary circle as participants in the ongoing constitutional debate that culminated in the Personal Rule of Charles in 1629 and the opening years of the Civil Wars a decade later. After the political allegiances of those who surround the production of the 1643 Chronicle have been thoroughly considered, the focus of this study then turns to the text of the Chronicle itself. Wheelock inserts himself into the Chronicle's narrative by means of excision, substitution, and inconsistent translation so that the Chronicle may more easily conform to early modern perceptions of kingship. Specifically, his intervention into and manipulation of the genealogical West Saxon Regnal Table and his interpretation of the advisory body of the Anglo-Saxons known as the witan provide a lens through which to read the medieval Chronicle as a political document suitable for seventeenth-century purposes. Lastly, this dissertation traces the influences of the 1643 edition upon the only other Chronicle printed in that century—the 1692 version compiled and edited by Bishop Edmund Gibson. This final chapter argues that Gibson, like Wheelock, uses the Chronicle for political, and in the latter antiquary's case, nationalistic ends. FSU_2017SP_Day_fsu_0071E_13770
An Annotated Analysis of the Choral Settings of Sara Teasdale's Literary 'Songs'. Ridgley, Hillary J., Thomas, Andre J. (Andre Jerome), Stebleton, Michelle, Bowers, Judy K.
(Judy Kay), Kelly, Steven N., Florida State University, College of Music The purpose of this study is to provide conductors with a comprehensive annotated rubric of choral works based on the poetry of Sara Teasdale. Each Teasdale poem, printed in its original form, is presented with accompanying historically relevant information and a brief analysis. Choral settings based on the poems are annotated by following a rubric that includes: title, poem, composer/arranger, publisher, voicing, instrumentation, meter, key signature, text, tonal concepts, and rhythmic concepts. The tonal and rhythmic concepts indicate patterns singers should be able to independently read or may need to rehearse in advance of reading. The purpose of the design is to permit conductors to quickly gather information about the poetry and the music in order to expedite literature selection. The in-depth examination also allows conductors to program a broader range of settings of Teasdale poetry. FSU_2017SP_Ridgley_fsu_0071E_13719
Anti-Establishment Political Parties: Conception, Measurement, and Consequences. Cornacchione, Teresa Lee, Ehrlich, Sean D., Grant, Jonathan A., Weissert, Carol S., Gomez, Brad T., Beazer, Quintin H., Florida State University, College of Social Sciences and Public Policy, Department of Political Science The incredible rise of so-called "anti-establishment" parties in Europe has left scholars scrambling to define and classify the movement. Much scholarly attention has been paid to radical right wing parties, and the sources of their electoral support. While important and intriguing, the current literature has yet to develop a cohesive definition of the anti-establishment, and has too heavily used terms such as "populist," "anti-establishment," and "radical right-wing" interchangeably. Further, extant research has based theories of these parties' electoral support largely with the radical right-wing in mind, potentially ignoring theories that could explain support for these parties from the left, right, and center of the political spectrum. Finally, current research has not substantially explored how these parties, traditionally excluded from policy-making, behave once they are seated in parliaments. This dissertation aims to remedy these three shortcomings.
First, I develop a conceptual definition and measurement scheme that encapsulates both ideological positioning and anti-establishment sentiment. Then, I explore how political trust influences electoral support for anti-establishment parties positioned at all areas of the classic left-right spectrum. Finally, I analyze their parliamentary behavior, assessing their level of activity and their preferred policy domains. My findings underscore the importance of conceiving of anti-establishment parties as existing along a unique dimension, separate from ideology, whose electoral viability can be explained via a unified theory, and who behave uniquely in parliament. 2018_Su_Cornacchione_fsu_0071E_14698
Antiprejudice among White Americans and the Proactive Fight to End Discrimination Toward Black Americans. Lacosse-Brannon, Jennifer, Plant, Ashby, Hightower, Patricia Y., Maner, Jon K., Conway, Paul, Kelley, Colleen M., Florida State University, College of Arts and Sciences, Department of Psychology Despite social pressure for White Americans to be nonprejudiced, Black Americans still regularly experience discrimination. We argue that bias persists because although many White Americans espouse nonprejudiced beliefs, far fewer actively work to combat discrimination. Previous research on a newly developed scale of antiprejudice, or the belief that White people should proactively fight discrimination, indicates that higher levels of antiprejudice are associated with increased proactive support among White people for multiple actions that would help put an end to discrimination. Drawing from research on prescriptive moral convictions (i.e., what people should do), we predicted that teaching White Americans four reasons why White people should be proactive in the fight against systemic racism would increase perceptions that White involvement is necessary in order for real change to occur and would increase antiprejudiced beliefs. Results of a pilot study supported our predictions. Moreover, in a second study we replicated our results and extended them by demonstrating that our intervention not only increased perceptions about the necessity of White involvement and antiprejudice, but was also associated with a greater likelihood of volunteering for an equal rights organization. 2018_Sp_LaCosse_fsu_0071E_14406
Application and Analysis of the Extended Lawrence Teleoperation Architecture to Power Hardware-in-the-Loop Simulation.
Langston, James, Edrington, Christopher S., Vanli, Omer Arda, Steurer, Michael, Roberts, Rodney G., Faruque, Md Omar, Florida State University, College of Engineering, Department of Electrical and Computer Engineering Power hardware-in-the-loop (PHIL) simulation is a technique whereby actual power hardware is interfaced to a virtual surrounding system through PHIL interfaces making use of power amplifiers and/or actuators. PHIL simulation is often an attractive approach for early integration testing of devices, allowing testing with unrealized systems with substantial flexibility. However, while PHIL simulation offers a number of potential benefits, there are also a number of associated challenges and limitations stemming from the non-ideal aspects of the PHIL interface. These can affect the accuracy of the experiments and, in some cases, lead to instabilities. Consequently, accuracy, stability, and sensitivity to disturbances are some of the most important considerations in the analysis and design of PHIL simulation experiments, and the development of PHIL interface algorithms (IA) and augmentations for improvements in these areas is the subject of active research. Another area of research sharing some common attributes with PHIL simulation is the field of robotic bilateral teleoperation systems. While there are some distinctions and differences in characteristics between the two fields, much of the literature is also focused on the development of algorithms and techniques for coupling objects. A number of disparate algorithms and augmentations have also been proposed in the teleoperation literature, some of which are fundamentally very similar to those applied in PHIL simulation. While some of the teleoperation methods may have limited applicability in PHIL experiments, others have substantial relevance and may lend themselves to improvements in the PHIL application area. This work focuses on the application and analysis of a teleoperation framework in the context of PHIL simulation. The extended Lawrence Architecture (ELA) is a framework unifying and describing a large set of teleoperation interfacing algorithms. This work focuses on the application and analysis of the ELA to PHIL simulation. This includes the expression of existing PHIL IAs in the context of the ELA, derivation of relevant transfer functions and metrics for assessment of performance, the exploration of the implications of the transparency requirements, and the exploration of new IAs supported by the ELA which may be well suited to the particular characteristics of PHIL simulation. 2018_Sp_Langston_fsu_0071E_14321
Application of FT-ICR Mass Spectrometry in Hydrogen Deuterium Exchange and Lipidomics.
Liu, Peilu, Marshall, Alan G., Tang, Hengli, Dorsey, John G., Miller, Brian G., Florida State University, College of Arts and Sciences, Department of Chemistry and Biochemistry High resolution mass spectrometry, especially Fourier transform ion cyclotron resonance mass spectrometry (FT-ICR MS), is a widely practiced technique of choice in proteomics and lipidomics due to its high sensitivity, reproducibility, and wide dynamic range. FT-ICR MS enables quick assignments of hundreds of peptides and lipids with extreme complexity. Chapter 1 introduces the fundamentals of the FT-ICR phenomenon for mass measurement and the basic theory of the LC-MS based hydrogen deuterium exchange (HDX) technique for higher order structure studies. Chapter 1 also introduces the application of mass spectrometry in lipidomics, including lipid classification and MS analysis. Chapter 2 describes the characterization of the binding interfaces in the R2TP complex by hydrogen/deuterium exchange mass spectrometry. The two closely related AAA+ family ATPases Rvb1 and Rvb2 form a tight functional complex with two Hsp90 interactors: Pih1p and Tah1p. The R2TP complex is involved in multiple biological processes including apoptosis, PIKK signaling, RNA polymerase II assembly, and snoRNP biogenesis. The current lack of structural information on the R2TP complex prevents a mechanistic understanding of many biological processes. By use of solution-phase HDX MS, we probed the contact surfaces on Pih1p-Tah1p upon Rvb1/2p binding. The present results demonstrate that Pih1p-Tah1p interacts with Rvb1/2p through the N-terminal and IDR2 regions of Pih1p. Significantly, HDX also detected a rearrangement of residues 38–60 of Pih1p and 1–44 of Tah1p upon formation of the R2TP complex. Chapter 3 depicts the study of conformations of activated, disease-associated glucokinase variants by comparative hydrogen/deuterium exchange mass spectrometry. Human glucokinase (GCK) acts as the body's primary glucose sensor and plays a critical role in glucose homeostatic maintenance. Previous biochemical and biophysical studies suggest the existence of two activated variants. HDX results demonstrate that a disordered active site, which folds upon binding of glucose, is protected from exchange in the α-helix variant. Additionally, the α-helix variant displays an increased level of exchange near the enzyme's hinge region. In contrast, the β-hairpin variant does not show a substantial difference in global or local exchange relative to that of wild-type GCK. The work elucidates the structural and dynamic origins of GCK's unique kinetic cooperativity. Chapter 4 investigates the structure of an antibody with 'Knob-into-hole' mutations by HDX MS. Bispecific antibodies (BsAbs) have flourished in the biopharmaceutical industry for targeting two distinct antigens simultaneously. The 'Knob-into-hole' approach is one way to manufacture bispecific antibodies. The applicability and advantages of 'Knob-into-hole' engineered bispecific antibodies are vast.
However, concerns about the conformational changes and immunogenicity risks posed by the new approach have been raised. To better understand the conformations and dynamics impacted by the 'knob' and 'hole' mutations, HDX MS is used to characterize peptide-level conformational changes of a 'Knob-into-hole' engineered antibody. The study shows that there is no significant structural alteration induced by the 'Knob-into-hole' framework. In Chapter 5, the applicability of resolving HDX-derived isotopic fine structure by ultrahigh resolving power FT-ICR mass spectrometry is discussed. In an HDX experiment, labeling a protein with deuterium causes deuterium incorporation, resulting in distributions of various combinations of 13C1H and 12C2H (Δm = 2.9 mDa). The isotopic fine structure typically cannot be used to evaluate the deuteration level due to the difficulty of resolving fine structures for all proteolytic peptides spanning a wide mass range in HDX experiments. The introduction of a hexapolar cell triples the observed resolving power on a 14.5 tesla FT-ICR mass spectrometer; thus, we successfully extend the capability of resolving isotopic fine structure to most of the identified peptides. Additionally, a new method of analyzing isotopic fine structure-resolved HDX data was proposed to determine degrees of deuterium incorporation. Another research area I have worked on is the characterization of polar lipids by LC coupled with FT-ICR mass spectrometry. Algae lipids contain long-chain saturated and polyunsaturated fatty acids. The lipids may be transesterified to generate biodiesel fuel. In Chapter 6, I compared polar lipid compositions for two microalgae, Nannochloropsis oculata and Haematococcus pluvialis, that are prospective lipid-rich feedstock candidates for an emerging biodiesel industry. Online nano liquid chromatography coupled with negative electrospray ionization 14.5 T Fourier transform ion cyclotron resonance mass spectrometry ((−) ESI FT-ICR MS) with newly modified ion optics provides ultrahigh mass accuracy and resolving power to identify hundreds of unique elemental compositions. Assignments are confirmed by isotopic fine structure for a polar lipid extract. Collision-induced-dissociation (CID) MS/MS provides additional structural information. H. pluvialis exhibits more highly polyunsaturated lipids than does N. oculata. 2018_Su_Liu_fsu_0071E_14651
Application of Particle Tracking Velocimetry to Thermal Counterflow and Towed-Grid Turbulence in Helium II. Mastracci, Brian, Guo, Wei, Piekarewicz, Jorge, Oates, William, Taira, Kunihiko, Florida State University, College of Engineering, Department of Mechanical Engineering The superfluid phase of helium-4, known as He II, is predominantly used to cool low-temperature devices. It transfers heat by a unique thermally driven counterflow of its two constituents, a classical normal fluid and an inviscid superfluid devoid of entropy.
It also has potential use for economical reproduction and study of high Reynolds number turbulent flow due to the extremely small kinematic viscosity and classical characteristics exhibited by mechanically driven flow. A number of diagnostic techniques have been applied in attempts to better understand the complex behavior of this fluid, but one of the most useful, flow visualization, remains challenging because of complex interactions between foreign tracer particles and the normal fluid, superfluid, and a tangle of quantized vortices that represents turbulence in the superfluid. An apparatus has been developed that enables application of flow visualization using particle tracking velocimetry (PTV) in conjunction with second sound attenuation, a mature technique for measuring quantized vortex line density, to both thermal counterflow and mechanically driven towed-grid turbulence in He II. A thermal counterflow data set covering a wide heat flux range and a number of different fluid temperatures has been analyzed using a new separation scheme for differentiating particles presumably entrained by the normal fluid ("G2") from those trapped on quantized vortices ("G1"). The results show that for lower heat flux, G2 particles move at the normal fluid velocity vn, but for higher heat flux all particles move at roughly vn/2 ("G3"). Probability density functions (PDFs) for G1 particle velocity vp are Gaussian curves with tails proportional to |vp|⁻³, which arise from observation of particles trapped on reconnecting vortices. A probable link between G1 velocity fluctuations and fluctuations of the local vortex line velocity has been established and used to provide the first experimental estimation of c₂, a parameter related to energy dissipation in He II. Good agreement between the length of observed G2 tracks and a simple model for the mean free path of a particle traveling through the vortex tangle suggests that flow visualization may be an alternative to second sound attenuation for measurement of vortex line density in steady-state counterflow. Preliminary PTV and second sound data in decaying He II towed-grid turbulence show agreement with theoretical predictions, and enable reliable estimation of an effective kinematic viscosity and calculation of longitudinal and transverse structure functions, from which information about the energy spectrum evolution and intermittency enhancement can be obtained. 2018_Fall_Mastracci_fsu_0071E_14818
Applications of 21 Tesla FT-ICR Top-down Proteomics in Clinical Research and Diagnosis. He, Lidong, Marshall, Alan G., Tang, Hengli, Dorsey, John G., Hu, Yan-yan, Florida State University, College of Arts and Sciences, Department of Chemistry and Biochemistry With recent progress in clinical proteomics, mass spectrometry (MS)-based methods have been widely implemented in the diagnosis of diseases, offering the high specificity that conventional clinical tests lack.
The recent development of high resolution, high mass accuracy mass spectrometry enables full characterization of intact proteins (e.g., therapeutic monoclonal antibodies and endogenous protein biomarkers) in a top-down MS/MS fashion. Top-down MS/MS offers a "bird's eye" view of the proteins and yields more confident protein sequence assignment and post-translational modification localization. This dissertation describes the latest top-down applications in disease precision diagnosis that can potentially lead to future personalized treatment. Chapter 2 describes a pilot study for characterization of monoclonal antibodies by top-down and middle-down approaches with the advantages of fast sample preparation with minimal artifacts, ultrahigh mass accuracy, and extensive residue cleavages by use of 21 tesla FT-ICR MS/MS. The ultrahigh mass accuracy yields an rms error of 0.2–0.4 ppm for the antibody light chain, heavy chain, heavy chain Fc/2, and Fd subunits. The corresponding sequence coverages are 81%, 38%, 72%, and 65% with MS/MS rms error ~4 ppm. Extension to a monoclonal antibody in human serum as a monoclonal gammopathy model yielded 53% sequence coverage from two nano-LC MS/MS runs. A blind analysis of five therapeutic monoclonal antibodies at clinically relevant concentrations in human serum resulted in correct identification of all five antibodies. Nano-LC 21 T FT-ICR MS/MS provides nonpareil mass resolution, mass accuracy, and sequence coverage for mAbs, and sets a benchmark for MS/MS analysis of multiple mAbs in serum. This is the first time that extensive cleavages for both variable and constant regions have been achieved for mAbs in a human serum background. Chapter 3 describes a novel protein de novo sequencing method, given that complete sequence coverage by top-down MS/MS is virtually impossible. In characterizing the "AA sequence gap" between two adjacent fragments, the number of gap AA sequences with identical masses for di-, tri-, and tetra-AA gaps grows exponentially with increasing number of gap amino acids. If peptide fragment mass could be measured exactly (in practice, to 0.00001 Da), it would then be possible to define the overall atomic composition for the group of amino acids spanning a product ion gap 3-4 amino acids long. I show that de novo top-down/middle-down MS/MS can determine the germline sequence category for a given monoclonal antibody and further serve to identify its novel mutations. Chapter 4 applies the developed top-down protein de novo sequencing to the characterization of serum monoclonal immunoglobulins from plasma cell disorders. The current five-year survival rate for systemic AL amyloidosis or multiple myeloma is below 50%, indicating the urgent need for better diagnosis methods and treatment plans. Unlike genomic testing, which requires bone marrow aspiration and may fail to identify all monoclonal immunoglobulins produced by the body, the present method requires only a blood draw. In addition, circulating monoclonal immunoglobulins spanning the entire population are analyzed and reflect the selection of germline sequence by B cells. The monoclonal immunoglobulin light chain FR2-CDR2-FR3 was sequenced by de novo MS/MS and 100% matched the gene sequencing result except for two amino acids with isomeric counterparts, enabling accurate germline sequence classification. This work represents the first application of top/middle-down MS/MS for de novo sequencing of endogenous monoclonal immunoglobulins against a polyclonal immunoglobulin background.
Chapter 5 is focused on top-down MS/MS diagnosis of hemoglobin disorders. Hemoglobinopathies and thalassemias are the most common genetically determined disorders. Current diagnostic methods include cation exchange high performance liquid chromatography and electrophoresis for screening, whose results can be ambiguous because of limited resolving power; expensive and laborious genetic testing is then needed for confirmation. I developed a top-down MS/MS approach with the advantages of fast data acquisition (3 min), ultrahigh mass accuracy, and extensive residue cleavages by use of 21 tesla FT-ICR MS/MS for hemoglobin variant sequence de novo characterization and thalassemia diagnosis. With this generic approach for hemoglobin variant de novo sequencing, all eighteen hemoglobin variants were correctly identified in a blind analysis, which includes the first characterization of the homozygous hemoglobin Himeji variant. This is the first time that the abundance ratio between intact δ and β subunits (δ/β) has been used for beta thalassemia (including beta thalassemia trait/major) screening. Therefore, 21 T FT-ICR MS sets the benchmark for top-down MS/MS analysis of hemoglobin variants and thalassemia. 2018_Su_He_fsu_0071E_14616
Applications of Advanced Magnetic Resonance Techniques to the Study of Molecule-Based Magnetic Materials. Greer, Samuel Michael, Hill, S., Shatruk, Mykhailo, Xiong, Peng, Steinbock, Oliver, Florida State University, College of Arts and Sciences, Department of Chemistry and Biochemistry The highly interdisciplinary study of molecular magnetism spans a wide array of topics, ranging from spintronics and quantum computing to enzyme function and MRI contrast agents. At the core of all these fields is the study of materials whose properties can be controlled through the rational design of molecules. The chemical tailoring of molecular magnetic properties can only be achieved by understanding the relationship between the physical and electronic structures. In this dissertation, the interplay between structure and physical properties is probed using a variety of magnetic resonance techniques. In Chapter 1, we give a succinct overview of the various methods utilized in this dissertation. We first describe the experimental methods, including electron paramagnetic resonance (EPR), 57Fe nuclear gamma resonance (Mössbauer) spectroscopy, electron double resonance detected nuclear magnetic resonance (ELDOR-NMR), and Fourier transform far-infrared (FTIR) spectroscopy. In addition to the introduction of each technique, we describe how the data are analyzed and what quantities may be extracted from each method. We also introduce the quantum chemical methods used to rationalize the spectroscopic parameters. In Chapter 2, we investigate a recently reported Fe-V triply bonded species, [V(iPrNPPh2)3FeI], using high frequency EPR (HFEPR), field- and temperature-dependent 57Fe Mössbauer spectroscopy, and high-field ELDOR-NMR.
Using this suite of physical methods, we probe the electron spin distribution as well as the effects of spin-orbit coupling on the electronic structure. This is accomplished by measuring the effective g-factors as well as the Fe/V electronuclear hyperfine interaction tensors of the spin S = ½ ground state. We have rationalized these tensors in the context of ligand field theory supported by quantum chemical calculations. This combined theoretical and experimental analysis suggests that the S = ½ ground state originates from a single unpaired electron predominantly localized on the Fe site. Chapter 3 describes a combined HFEPR and variable-field Mössbauer spectroscopic investigation of a pair of bimetallic compounds with Fe-Fe bonds, [Fe(iPrNPPh2)3FeR] (R = ≡NtBu and PMe3). Both of these compounds have high spin ground states: the R = PMe3 compound has S = 7/2 and the R = ≡NtBu compound has S = 5/2. The ligand set employed in this work encapsulates each Fe site in a different coordination environment. This results in polarized bonding orbitals which endow each nuclear site with unique hyperfine tensors, as revealed by Mössbauer spectroscopy. Absent the metal-metal bond, the tris-amide bound site in both compounds is expected to be Fe(II). To gain insight into the local site electronic structure, we have concurrently studied a compound containing a single Fe(II) in a tris-amide site. Our spectroscopic studies have allowed us to assess the electronic structure via the determination of the zero field splitting parameters and 57Fe electronuclear hyperfine tensors for the entire series. Through the insight gained in this study, we propose some strategies for the design of polymetallic single molecule magnets where the metal-metal interactions are mediated by the formation of covalent bonds between metal centers. Recently, a great deal of the work in molecular magnetism has moved away from polymetallic compounds and towards molecules containing only a single magnetic ion. A critical challenge in this endeavor is to ensure the preservation of orbital angular momentum in the ground state. The stabilization of the ground state orbital moment generates the strong magnetic anisotropy which is often required for the design of magnetic materials. The presence of unquenched orbital angular momentum can be identified by significant shifts in the g-value away from the free ion value. In an initial report of a Ni(I) coordination complex, which was found to exhibit field-induced slow magnetic relaxation, no EPR signal was observed. Given the expectation that orbital angular momentum can shift the g-values beyond the range expected for a typical S = ½ system, we have reexamined this compound using multi-frequency EPR and field-dependent FTIR spectroscopy. Through a combined spectroscopic and theoretical effort, we have characterized the effect of first order spin-orbit coupling on the electronic structure. The final report, Chapter 5, examines an exciting new class of photomagnetic materials based on bisdithiazolyl radicals. These materials, and others with magnetic properties that can be modulated via optical excitation, offer enticing opportunities for the development of next generation technologies. The dimorphic system in this study crystallizes in two phases, one composed of diamagnetic dimers and the other of paramagnetic radicals. Here we report on the use of high-field electron paramagnetic resonance spectroscopy to characterize both the thermally- and light-induced transitions in the dimer phase.
During the course of this study we show that signals originating from residual radical defects in the dimer phase can be differentiated from those arising from the radical phase. 2018_Fall_Greer_fsu_0071E_14848

Applications of Alkynogenic Fragmentation Products Derived from Vinylogous Acyl Triflates. Ramsubhag, Ron Robert, Fajer, Piotr G., Saltiel, Jack, Zhu, Lei, Florida State University, College of Arts and Sciences, Department of Chemical and Biomedical Engineering. Carbon-carbon bond formation is the foundation for synthesizing complex molecules and has gathered the attention of many synthetic chemists. One must keep in mind that these reactions depend on materials chosen for a specific purpose when tackling a structural framework, materials which may require additional steps to create and, at times, are difficult to prepare. As significant as C-C bond formation reactions are, these minor setbacks may draw caution when synthesizing a complicated molecule whose structural framework cannot be easily accessed by the union of two fragments. On the other hand, the less familiar C-C bond cleavage reactions have, over time, demonstrated the potential to generate unique structural building blocks that can be used to overcome certain obstacles that other synthetic methods cannot. Here, we will be focusing on concerted anionic five-center fragmentation reactions using vinylogous acyl triflates. The generated alkynogenic fragments will then be used in different applications. We will begin by looking at chemoselective "click" reactions. The strain-promoted alkyne is synthesized by a tandem intramolecular nucleophilic addition / fragmentation. The expanded ring will contain a strained cycloalkyne which will later be tethered to a terminal alkyne. The diyne will be used to provide an example of a "dual-click" coupling via SPAAC or CuAAC in either sequential order. Next, we will expand the tandem fragmentation / olefination methodology developed in this work to include dienynes. The dienyne provides the structural backbone needed to produce neopentylene indanes. This methodology is used to design new ibuprofen derivatives that demonstrate rigidity and increased hydrophobicity to modulate the molecular pharmacology of ibuprofen. FSU_FALL2017_Ramsubhag_fsu_0071E_14133

Applied Predictive Modeling for Measurement and Inference in International Conflict and Political Violence. Henrickson, Philip Edward, Souva, Mark A., Grant, Jonathan A., Carroll, Robert J., Ehrlich, Sean D., Florida State University, College of Social Sciences and Public Policy, Department of Political Science.
Advances in computing and machine learning have enabled researchers to use many different tools to learn from data. This dissertation is devoted to using predictive modeling to learn from existing data in international conflict studies with the aim of offering new measures and insights for applied researchers in international relations. In the first chapter, I explore the expected cost of war, which is a foundational concept in the study of international conflict. However, the field currently lacks a measure of the expected costs of war, and thereby any measure of the bargaining range. I develop a proxy for the expected costs of war by focusing on one aspect of war costs - battle deaths. I train a variety of machine learning algorithms on battle deaths for all countries participating in fatal military disputes and interstate wars between 1816 and 2007 in order to maximize out-of-sample predictive performance. The best performing model (random forest) improves performance over that of a null model by 25% and over a linear model with all predictors by 9%. I apply the random forest to all interstate dyads in the Correlates of War dataverse from 1816 to 2007 in order to produce an estimate of the expected costs of war for all existing country pairs in the international system. The resulting measure, which I refer to as Dispute Casualty Expectations (DiCE), can be used to fully explore the implications of the bargaining model of war, as well as allow applied researchers to develop and test new theories in the study of international relations. In the second chapter, I use these expected costs of war to explore another foundational concept in international relations: foreign threats. Researchers commonly theorize about the impact of a state's international security environment - that is, the extent to which a state is threatened by other states - yet the field currently lacks a measure which can effectively proxy for expectations of conflict. In order to create a new measure of threat, I train a number of machine learning algorithms on fatal militarized disputes over the years 1870-2001. I aggregate the predictions from these models at the country level to create a new measure of international conflict expectations for all states. In so doing, I am able to revisit the causes of international conflict via a data-driven approach, as well as provide a new measure of foreign threat for applied researchers. Finally, in the third chapter, I make use of this new measure to assess how international security affects a state's human rights behavior. International relations scholars have increasingly relied on domestic institutions to explain international conflict, but less work has focused on reversing the arrow. To this point, political violence scholars have principally relied on domestic factors to explain the conditions under which leaders use coercive means to maintain power. But political leaders do not exist in a vacuum; their decision making is informed by international and domestic factors. Therefore, I rely on both a predictive and inferential approach to assess whether foreign threats matter for state repression.
The measure of foreign threats does emerge as an important variable in predicting state repression, which suggests that there is a meaningful relationship between international security and human rights behavior. Additionally, I find some (limited) evidence that the measure is negatively related to human rights behavior: states with high levels of foreign threat are associated with higher levels of state repression. But this finding is sensitive to model specification and merits further inspection. 2018_Fall_Henrickson_fsu_0071E_14907

Approaching Rapture. Newberry, Jacob, Roberts, Diane, Kavka, Martin, Belieu, Erin, Winegardner, Mark, McVicar, Michael J., Florida State University, College of Arts and Sciences, Department of English. For my creative dissertation I have written a memoir, titled Approaching Rapture. The memoir is a somewhat traditionally structured coming-of-age story. In particular, it details the long ordeal of coming to terms with my sexuality and overcoming the religious strictures I grew up with. Approaching Rapture begins with a day when I was seven years old. My mother woke from a dream in which Jehovah had told her to wake up, because Christ was coming at dawn. The opening scene depicts her rushing me downstairs to watch for Christ through the window. We spent what we believed to be our last hours praying for the sinners who would remain. This dramatic opening serves as the background music for everything that comes after. There is a comparatively short first section that details my childhood, though the scenes depicted will also carry a great deal of emotional weight through the rest of the book. The childhood I depict is one of severe violence, both physical and psychic. It is not merely a chronicle of woe, however. There was a persistence of wonder throughout my earliest years, cultivated mostly by my mother, for whom the invisible and supernatural were omnipresent. My family was from a small town in southern Mississippi, and we were Southern Baptist, and we were often even more stridently fundamentalist than the other congregants. I was a fervent believer, and I was taught that demonic possession was not only real but common. Thus, when I began the terrifying experience of adolescence, I came to believe that I was possessed. What was really happening, of course, was that I was having the sexual awakening that all children have at that age. I was gay but unable to say that at the time, because homosexuality was literally the worst fate I could imagine. (It was much worse than, say, becoming a murderer or a rapist or some other kind of violent criminal.) I believed this fate to be so unthinkably evil because of the world I grew up in, where homosexuality was commonly depicted as the most hideous and diseased of all of Satan's traps. Yet there it was present in myself. It led me nearly to suicide. The memoir focuses on the year I spent as a member of a cult.
I do not often use the word "cult" in the book, except as it appears in dialogue, because I am not adequately equipped to define what is and is not a "cult." Still, the book focuses on my year inside the group, which I joined because I loved a boy who was a member. He was also in love with me, and though we never consummated our relationship, we did continue to develop an intensity that is common to 15- and 16-year-olds. And, as with many such romances, ours was hidden. The main difference between ours and other teenage relationships was that we feared quite literally for our souls in the process. As the year progressed, the experience in that group (which was called the First Century Christian Fellowship) intensified. We met and prayed five nights a week, and after a month I was told by the group's leader, Martin, to dissolve all friendships outside the group. After about six months, I was no longer allowed to speak to my family, except in transactional terms (such as asking for lunch money), because they were not members of the group. The boy I loved (I call him "Michael" in the book) was not becoming as isolated, since his family had joined the group. Michael and I believed we were hiding our relationship well, but 16-year-olds in love rarely possess that skill. The middle section of the book depicts both the glorious experience of first love and the grotesque horror of our fear of being discovered. And, of course, we were discovered. I was subjected to an exorcism, after which I only had the strength to return to the group one more time. It was at that last meeting (which I didn't know was my last) where Martin prophesied that I would die by my own hand, in three years' time, if I didn't return. I didn't return. Paradoxically, Martin's prophecy was so damaging to my psyche that it may have saved me. I had given up my hopes for college or a life outside the church, and I left Martin's group with enough time to salvage my time and apply for higher education. I went to college and excelled, managing to wholly forget what had transpired. The three-year deadline came and went, and after it I spent a semester in Paris, where I studied piano at a conservatory. It was in Paris that I first came to terms with my sexuality, and where I had my first sexual experience. When I returned to Mississippi, I began having a series of nightmares so frightening that I began therapy. It was in therapy that I realized that all of my dreams were actually memories, and that I had been subjected to a greater deal of trauma than I had ever realized. And tucked away at the bottom of all these dreams was the memory of one person: Michael. I remembered how much I had loved him and how he had saved me from dying. It was then that I knew I would have to find him again. I would have to bring him back into my life. The last portion of the book depicts the almost-manic way I pursued Michael, first calling around for weeks to get his number, then finding a way to talk him into letting me visit him. The summer of 2007 I went home for the first time in years, since Michael was still living in our hometown. I lived with my mother that summer. Hurricane Katrina had destroyed most of the landscape two years prior, and that summer was my first extended time among the ruins. The book all leads to this summer, to my goal of getting Michael to move back with me and to become a permanent part of my life again. 2018_Sp_Newberry_fsu_0071E_14304 Architected Multiscale Polymer Foams. 
Ahmed, Mohammad Faisal, Zeng, Changchun, Shanbhag, Sachin, Liang, Zhiyong, Yu, Zhibin, Florida State University, College of Engineering, Department of Industrial and Manufacturing Engineering. Polymeric foam materials continue to gather commercial and research interest due to their unique physical characteristics and emerging applications in a wide variety of industries. This research work viewed the polymer foam industry from three different perspectives, namely materials, processes and applications. Accordingly, technical challenges were carefully selected to make contributions towards each segment by expanding materials choice, proposing an architected foam fabrication process and exploring multifunctional applications of foam sensors. Thermoplastic elastomers are thermally processable yet rubberlike materials which experience shrinkage when operated between the glass transition temperatures of the soft rubbery phase and the hard phase. This poses a challenge for making microcellular foams using the batch foaming process, where the materials are not fully melted to generate a cellular structure. The issue was addressed in this research by incorporating a second phase (i.e., a polymer blend system) to perform as a shape-fixing unit. Thus, a series of thermoplastic polyurethane (TPU) elastomeric foams were prepared by blending in polylactic acid (PLA) as the shape-fixing unit. The morphological, thermal and rheological behavior of the blend system was studied prior to foaming. The blends that contained PLA as the minor phase resulted in foams with a high expansion ratio. These blend foams were compared to TPU foams in terms of shape fixity ratio. The results were fitted with the Kohlrausch-Williams-Watts (KWW) function to estimate foam relaxation times. Foam relaxation time and shape fixity ratio increased with increasing PLA content. The glass transition temperature of PLA acted as the anchor point to stabilize the foam structure. Architected polymeric materials, when designed for a specific application, could satisfy design requirements through unit cell designs with controllable pore size, pore density and pore connectivity. With the development of additive manufacturing, fabrication of macro-, micro- and even nanoporous structures has become a possibility. In this research, a new route to fabricating architected multiscale polymer foam is proposed and studied in detail with a view to realizing its potential as a near net-shape process. The fabrication process utilized the synergy of additive manufacturing and the batch foaming process to induce macro- and microporosity (i.e., structural hierarchy). The results suggested that the process can generate foams with more than 95% expansion uniformity with significantly reduced saturation time.
The process is also capable of handling a variety of thermoplastics, including polymer blends. Traditional applications of polymeric materials include insulation, energy absorption, flotation, packaging and so on. Though a relatively new concept, multifunctional foams have attracted the research community to develop and explore applications of materials that utilize foams as the skeleton. Such materials demonstrate sensing capability owing to piezoresistive characteristics induced by conductive nanomaterials. Piezoresistive auxetic foam sensors coated with silver nanowire were prepared in this research to demonstrate their application as pressure sensors, 3D strain sensors, smart filtration and human motion interfaces. The auxetic foam sensors reported herein demonstrated improved piezoresistive properties compared to their conventional counterparts and showed repeatable and reliable sensing performance for a variety of deformation modes. 2018_Su_Ahmed_fsu_0071E_14717

Arithmetic Aspects of Noncommutative Geometry: Motives of Noncommutative Tori and Phase Transitions on GL(n) and Shimura Varieties Systems. Shen, Yunyi, Marcolli, Matilde, Aluffi, Paolo, Chicken, Eric, Bowers, Philip L., Petersen, Kathleen L., Florida State University, College of Arts and Sciences, Department of Mathematics. In this dissertation, we study three important cases in noncommutative geometry. We first observe the standard noncommutative object, the noncommutative torus, in noncommutative motives. We work with the category of holomorphic bundles on a noncommutative torus, which is known to be equivalent to the heart of a nonstandard t-structure on coherent sheaves on an elliptic curve. We then introduce a notion of (weak) t-structure in dg categories. By lifting the nonstandard t-structure to the t-structure that we defined, we find a way of seeing a noncommutative torus in noncommutative motives. By applying the t-structure to a noncommutative torus and describing the cyclic homology of the category of holomorphic bundles on the noncommutative torus, we finally show that the periodic cyclic homology functor induces a decomposition of the motivic Galois group of the Tannakian category generated by the associated auxiliary elliptic curve. In the second case, we generalize the results of Laca, Larsen, and Neshveyev on the GL2-Connes-Marcolli system to the GLn-Connes-Marcolli systems. We introduce and define the GLn-Connes-Marcolli systems and discuss the existence and uniqueness questions of the KMS equilibrium states. Using the ergodicity argument and Hecke pair calculation, we classify the KMS states at different inverse temperatures β. Specifically, we show that in the range n − 1 < β ≤ n there exists only one KMS state. We prove that there are no KMS states when β < n − 1 and β ≠ 0, 1, …,
n − 1, while we actually construct KMS states for integer values of β in 1 ≤ β ≤ n − 1. For β > n, we characterize the extremal KMS states. In the third case, we push the previous results to more abstract settings. We mainly study the connected Shimura dynamical systems. We give the definition of the essential and superficial KMS states. We further develop a set of arithmetic tools to generalize the results of the previous case. We then prove the uniqueness of the essential KMS states and show the existence of the essential KMS states for high inverse temperatures. FSU_SUMMER2017_Shen_fsu_0071E_13982

Art Education as a Means of Promoting Democracy: Preparing Pre-Service Art Teachers for Social Justice Education. Alazmi, Fatemah M., Shields, Sara Scott, Jones, Tamara Bertrand, Khurshid, Ayesha, Broome, Jeffrey L. (Jeffrey Lynn), Love, Ann Rowson, Florida State University, College of Fine Arts, Department of Art Education. The purpose of this qualitative study was to investigate the use of art as a pedagogical tool with pre-service art teachers in a graduate-level art education class. A curriculum was developed focusing on educational social justice theories and their application in regard to gender inequity and diversity issues. The goal was to lead students to engage in more self-directed learning and to become more pro-active in their society. The results indicate the value of using art making to help students explore, investigate, and examine self and self in relation to society. In addition, they shed light on transformational moments in the art-making process when students' awareness of self and social justice issues was heightened and democratic ideas were reinforced. The results have implications for classroom practice as well as for enhancing the quality of art education by incorporating social justice concerns in art education for individual and community development. FSU_SUMMER2017_Alazmi_fsu_0071E_14017

Art, History, and the Creation of Monastic Identity at Late Medieval St. Albans Abbey. Carter, Deirdre Anne, Emmerson, Richard Kenneth, Jones, Lynn, Johnson, David F. (David Frame), Killian, Kyle L., Leitch, Stephanie, Florida State University, College of Fine Arts, Department of Art History.
Although later medieval St. Albans Abbey has long been renowned as a preeminent center for the writing of historical chronicles, previous studies have not acknowledged that the monastic community also had a sustained tradition of visually representing the house's institutional history. This dissertation demonstrates that between the late eleventh and early sixteenth centuries, the monks of St. Albans depicted and evoked their abbey's past in a large and diverse collection of artworks, ranging from illuminated manuscripts and pilgrim badges to monumental paintings and architecture. Monastic historical imagery was rarely produced during the Middle Ages, but the images and objects from St. Albans present a remarkably rich and complete account of the abbey's history from the time of its illustrious origins through the eve of its dissolution. Using an interdisciplinary approach to contextualize these artworks within the monastery's history and traditions, this study argues that the visual historiography of St. Albans served as a potent vehicle for the expression and self-fashioning of the abbey's corporate identity and historical memory. As will be demonstrated, this vast corpus of imagery focuses on three fundamental elements of the monastery's past: Saint Alban and his early cult, the eighth-century foundation of the monastery by King Offa of Mercia, and the house's post-foundation history. Through these artworks, many of which have not previously received the attention of art historians, the monks of St. Albans documented, celebrated, and occasionally manipulated their abbey's long and distinguished history, thereby providing a compelling justification for its continued prosperity and prestige. FSU_FALL2017_Carter_fsu_0071E_14200

Athlete Transition: Effects of Coping on Self-Concept Clarity of NCAA Athletes. Cologgi, Kimberly A. (Kimberly Ann), Chow, Graig Michael, Newman, Joshua I., Tenenbaum, Gershon, Conway, P. (Paul), Florida State University, College of Education, Department of Educational Psychology and Learning Systems. Understanding athlete transition is a complex process which involves many subjective pieces. A review of previous literature on athletic career termination has shown that two of the most highly debated topics include athletes' specific reason for retirement (Cockerill 2004; Orlick & Sinclair 1993; Webb, Nasco, Riley, & Headrick 1998), and the coping techniques employed by athletes during their transition period (Coakley 1983; Grove, Lavallee, & Gordon, 1997; Lavallee 2005; Sinclair & Orlick, 1993; Reynolds 1981).
The purpose of this study was to examine important components involved in retirement from National Collegiate Athletic Association (NCAA) competitive athletics: self-concept clarity, athletic identity, willingness to retire, coping and overall life satisfaction. Self-concept clarity was conceptualized as the primary variable of focus because it tends to be internally consistent over time (Lodi-Smith & Roberts, 2010), and previous studies have shown that the effect of role exits and entries negatively predicts one's perceived self-concept clarity (Light & Visser, 2013). Participants were female (n=148) and male (n=89) former NCAA athletes from over 75 different Division I colleges and universities across the United States, ranging in age from 20 to 27 years old (M=22.47, SD=.837). They were no more than 12 months removed from their last NCAA game or practice, and the total number of months they had been retired ranged from 1 to 12 months (M=7.77, SD=2.1). Path analyses were used to determine which factors significantly contributed to self-concept clarity and overall life satisfaction. Results revealed that coping style significantly mediated the relationship between athletic identity, willingness to retire, and self-concept clarity. Most importantly, emotion-focused coping led to higher self-concept clarity for athletes during the transition process, and avoidance coping led to a negative effect on athlete self-concept clarity. FSU_2017SP_Cologgi_fsu_0071E_13694

Atmospheric Mercury Wet Deposition along the Northern Gulf of Mexico: Seasonal and Storm-Type Drivers of Deposition Patterns and Contributions from Local and Regional Emissions. Krishnamurthy, Nishanth, Landing, William M., Miller, Thomas E., Holmes, Christopher D., Fuelberg, Henry E., Salters, Vincent J. M., Florida State University, College of Arts and Sciences, Department of Earth, Ocean and Atmospheric Science. Continuous event-based rainfall samples were collected at three sites throughout the Pensacola airshed from 2005 to 2011. Samples were analyzed for total mercury (Hg), a suite of trace metals (TMs), and major ions in order to understand how thunderstorms affected their wet deposition and concentrations in rainfall, estimate the contributions from regional coal combustion and other anthropogenic sources to Hg and TMs in rainfall along the Gulf Coast, and investigate the possible influence that a local 950 megawatt coal-fired power plant had on rainfall chemistry in the Pensacola airshed. Mercury was measured with a Tekran 2600 using a variation of the standard method used by the US Environmental Protection Agency (EPA) to measure total Hg in water, which allowed for the analysis of TMs from the same bottle without concern about contamination from reagents during sample preparation.
Trace metals were measured using an Agilent 7500cs quadrupole inductively coupled plasma mass spectrometer (ICP-MS) with an octopole reaction cell (ORC), which allowed for the detection of key coal-combustion tracers like arsenic (As) and selenium (Se). Our findings show that summertime rainfall Hg concentrations are higher than in other months despite higher rainfall amounts. In contrast, other measured pollutant TMs and ions did not show a consistent seasonal pattern. By incorporating Automated Surface Observing System data from nearby Pensacola International Airport and WSR-88D radar data from Eglin Air Force Base, we were able to classify the storm type (thunderstorm or non-thunderstorm) and analyze altitudes of hydrometeor formation for individual rain events. This showed that mid-altitude and high-altitude composite reflectivity radar values were higher for both thunderstorm and non-thunderstorm "warm season" (May - Sept) rain events compared to "cool season" (Oct - Apr) events, including cool season thunderstorms. Thus, warm season events can scavenge more soluble reactive gaseous Hg from the free troposphere. Two separate multiple linear regression analyses were conducted on log-transformed data, with and without interaction terms, to understand the effects of precipitation depth, season, and storm type on sample concentrations. The regressions without interaction terms showed that the washout coefficients for more volatile TMs like Hg and Se were less pronounced compared to other pollution-type elements, and that their concentrations were therefore less diluted for a given increase in precipitation depth, but otherwise displayed similar coefficients for season and storm type. The regression model with interaction terms revealed a more interesting dynamic, where thunderstorms caused enhanced Hg concentrations in rainfall regardless of season or precipitation depth, while showing a more volume-dependent relationship with TM concentrations, as concentrations increased with increasing rainfall amounts relative to non-thunderstorm events. This suggests a vacuum cleaner effect such that, for increasing storm strength, non-Hg aerosol TMs in the boundary layer are further entrained into a storm cell. With this understanding, a positive matrix factorization (PMF) analysis was conducted using the EPA PMF 5.0 software to estimate the contribution of different sources to Hg deposition. Our results suggest that approximately 84% (72 - 89%; 95% CI) of Hg in rainfall along the northern Gulf of Mexico is due to long-range transport from distant sources while a negligible amount (0 - 21%; 95% CI) comes from regional coal combustion. However, we found that anthropogenic sources like regional coal combustion and ore smelting were significant contributors to rainfall concentrations of other pollution-type TMs like copper, zinc, As, Se, and non-sea salt SO42-. Using modeled wind profiles via the HYSPLIT trajectory model, we assessed whether plumes from a local coal-fired power plant ("Plant Crist") could be detected in the rainfall chemistry of rain events occurring downwind of the plant. We limit this analysis to cool season rain events between June 2007 (when the model began) and December 2011 (when the study ended) because modeled wind profiles showed better agreement with observations during this time period compared to the warm season.
We also limit this analysis to cool season events since the spatial distribution of rainfall throughout the area is more even during this time, which makes sample comparisons between sites easier, since Hg/TM concentrations are affected by precipitation depth. Furthermore, we focus on Hg and other pollution-type TMs and major ions such as As, Se, and non-sea salt SO42- in this analysis as they serve as tracers of coal combustion. For our "unpaired-site" analysis, we analyzed, for each individual site, the rainfall chemistry in a given sample as a function of the proportion of rain events associated with that sample that occurred downwind of Plant Crist. Using this method, we were not able to find evidence that the plant had a significant influence on Hg/TM concentrations or Hg/TM:Al enrichment ratios in rainfall. Similarly, for our "paired-site" analysis, we consider the differences in rainfall chemistry between two sites - an upwind and downwind site pair - that were impacted by the same rain event, where the downwind site was exposed to plumes from Plant Crist while the upwind site was not. As with the unpaired-site analysis, we did not find significant differences in the rainfall chemistry between upwind-downwind site pairs with regard to sample concentrations or enrichment ratios. A multiple linear regression analysis was then conducted using interaction terms to understand the effects of the operation of a wet flue-gas desulfurization system (which began operation at the plant during the middle of the study), the relative exposure a rainfall sample had to the plumes coming from the plant, and the log-transformed precipitation depth on log-transformed sample concentrations. Except for As, the first regression analysis did not find statistically significant coefficient values for any of the variables that would indicate that the scrubber affected the rainfall chemistry at the two urban sites nearest to the plant. The calculations for As gave mixed results, as the coefficients for the non-interaction terms suggested that the scrubber and the plumes emanating from Plant Crist affected the concentration of As in rainfall while the interaction terms suggested that they did not. We perform another multiple linear regression analysis, but remove the complicating effects of precipitation depth on Hg/TM concentrations and instead analyze the effects that the scrubber and the plumes coming from the plant might have had on Hg/TM:Al ratios. Again, these results were inconclusive, as the regression coefficients suggested that the scrubber helped reduce Hg and TM emissions from the plant while also suggesting that samples with more exposure to the plant's plumes had lower enrichment ratios. We propose that we were unable to detect a chemical signal from Plant Crist in our rain samples for a few possible reasons, including quick scavenging of TMs from the plume at the onset of a rain event before reaching our sites, the reliance on radar data to determine start and stop times for rain events at the sites as opposed to on-site measurements, and the relatively low spatiotemporal resolution of the wind trajectory model. 2018_Su_Krishnamurthy_fsu_0071E_14732_comp

Automated One-Loop QCD and Electroweak Calculations with NLOX.
Honeywell, Steven Joseph, Reina, Laura, Aluffi, Paolo, Owens, Joseph F., Roberts, Winston, Yohay, Rachel, Florida State University, College of Arts and Sciences, Department of Physics. We introduce a new framework, NLOX, in which one-loop QCD and electroweak corrections to Standard Model processes can be automatically calculated. Within this framework, we calculate the first order of electroweak corrections to the hadronic production of Z + 1 b-jet and discuss some of the most relevant theoretical issues related to this process. FSU_2017SP_Honeywell_fsu_0071E_13868
CommonCrawl
Cosmic Amorphous Dust Model as the Origin of Anomalous Microwave Emission. Nashimoto, Masashi; Hattori, Makoto; Poidevin, Frédérick; Génova-Santos, Ricardo. Bibcode: 2020ApJ...900L..40N; DOI: 10.3847/2041-8213/abb29d. We have shown that the thermal emission of amorphous dust composed of amorphous silicate dust (a-Si) and amorphous carbon dust (a-C) provides an excellent fit both to the observed intensity and the polarization spectra of molecular clouds. The anomalous microwave emission (AME) originates from the resonance transition of the two-level systems attributed to the a-C with an almost spherical shape. On the other hand, the observed polarized emission in submillimeter wave bands is coming from a-Si. By taking into account a-C, the model prediction of the polarization fraction of the AME is reduced dramatically. Our model predictions of the 3σ lower limits of the polarization fraction of the Perseus and W 43 molecular clouds at 17 GHz are 8.129 × 10^-5 and 8.012 × 10^-6, respectively. The temperature dependence of the heat capacity of a-C shows peculiar behavior compared with that of a-Si. So far, the properties of a-C are unique to interstellar dust grains. Therefore, we coin our dust model the cosmic amorphous dust model.

The impact of two massive early accretion events in a Milky Way-like galaxy: repercussions for the buildup of the stellar disc and halo. We identify and characterize a Milky Way-like realization from the Auriga simulations with two consecutive massive mergers $\sim 2$ Gyr apart at high redshift, comparable to the reported Kraken and Gaia-Sausage-Enceladus. The Kraken-like merger (z = 1.6, $M_{\rm Tot}=8\times 10^{10}\, \rm {M_{\odot }}$) is gas-rich, deposits most of its mass in the... Orkney, Matthew D. A. et al. 2022MNRAS.517L.138O

The Circular Polarization of the Mn I Resonance Lines around 280 nm for Exploring Chromospheric Magnetism. We study the circular polarization of the Mn I resonance lines at 279.56, 279.91, and 280.19 nm (hereafter, UV multiplet) by means of radiative transfer modeling. In 2019, the CLASP2 mission obtained unprecedented spectropolarimetric data in a region of the solar ultraviolet including the Mg II h and k resonance lines and two lines of a subordinate... del Pino Alemán, Tanausú et al. 2022ApJ...940...78D

Euclid preparation. XXI. Intermediate-redshift contaminants in the search for z > 6 galaxies within the Euclid Deep Survey. Context.
The Euclid mission is expected to discover thousands of z > 6 galaxies in three deep fields, which together will cover a ∼50 deg2 area. However, the limited number of Euclid bands (four) and the low availability of ancillary data could make the identification of z > 6 galaxies challenging. Aims: In this work we assess the degree of van Mierlo, S. E. et al. 2022A&A...666A.200V
CommonCrawl
Low Pass and High Pass Filter Bode Plot

Frequency response plots of linear systems are often displayed in the form of logarithmic plots, called Bode plots after the mathematician Hendrik W. Bode, where the horizontal axis represents frequency on a logarithmic scale (base 10) and the vertical axis represents either the amplitude or the phase of the frequency response function. In Bode plots the amplitude is expressed in units of decibels (dB), where

\[\left| \frac{A_0}{A_i} \right|_{dB} = 20\log_{10}\left| \frac{A_0}{A_i} \right| \qquad (1)\]

While logarithmic plots may at first seem a daunting complication, they have two significant advantages:

1. The product of terms in a frequency response function becomes a sum of terms, because log(ab/c) = log(a) + log(b) − log(c). The advantage here is that Bode (logarithmic) plots can be constructed from the sum of individual plots of individual terms. Moreover, there are only four distinct types of terms present in any frequency response function:
a. A constant K.
b. Poles or zeros "at the origin" (jω).
c. Simple poles or zeros (1 + jωτ) or (1 + jω/ω0).
d. Quadratic poles or zeros [1 + jωτ + (jω/ωn)^2].
2. The individual Bode plots of these four distinct terms are all well approximated by linear segments, which are readily summed to form the overall Bode plot of more complicated frequency response functions.

RC Low-Pass Filter Bode Plots

Consider the RC low-pass filter. The frequency response function is

\[\frac{V_0}{V_i}(j\omega) = \frac{1}{1 + j\omega/\omega_0} = \frac{1}{\sqrt{1 + (\omega/\omega_0)^2}}\,\angle -\tan^{-1}\!\left(\frac{\omega}{\omega_0}\right) \qquad (2)\]

where the circuit time constant is τ = RC = 1/ω0 and ω0 is the cutoff, or half-power, frequency of the filter. This frequency response function has a constant of value K = 1 and a simple pole with cutoff frequency ω0 = 1/τ = 1/RC. Figure 1 shows the Bode magnitude and phase plots for the filter.

Figure 1 Bode plots for a low-pass RC filter; the frequency variable is normalized to ω/ω0. (a) Magnitude response; (b) phase angle response

The normalized frequency on the horizontal axis is ωτ = ω/ω0. The magnitude plot is obtained from the logarithmic form of the absolute value of the frequency response function:

\[\left|\frac{V_0}{V_i}\right|_{dB} = 20\log_{10}\frac{|K|}{|1 + j\omega\tau|} = 20\log_{10}\frac{|K|}{|1 + j\omega/\omega_0|} \qquad (3)\]

When ω ≪ ω0, the imaginary part of the simple pole is much smaller than its real part, such that |1 + jω/ω0| ≈ 1. Then

\[\left|\frac{V_0}{V_i}\right|_{dB} \approx 20\log_{10}K - 20\log_{10}1 = 20\log_{10}K \quad \text{dB} \qquad (4)\]

Thus, at very low frequencies (ω ≪ ω0), equation 3 is well approximated by a straight line of zero slope, which is the low-frequency asymptote of the Bode magnitude plot. When ω ≫ ω0, the imaginary part of the simple pole is much larger than its real part, such that |1 + jω/ω0| ≈ |jω/ω0| = ω/ω0.
Then

\[\left|\frac{V_0}{V_i}\right|_{dB} \approx 20\log_{10}K - 20\log_{10}\frac{\omega}{\omega_0} \approx 20\log_{10}K - 20\log_{10}\omega + 20\log_{10}\omega_0 \qquad (5)\]

Thus, at very high frequencies (ω ≫ ω0), equation 3 is well approximated by a straight line of −20 dB per decade slope that intercepts the log ω axis at log ω0. This line is the high-frequency asymptote of the Bode magnitude plot. A decade represents a factor of 10 change in frequency; thus, a one-decade increase in ω is equivalent to a unity change in log ω. Finally, when ω = ω0, the real and imaginary parts of the simple pole are equal, such that |1 + jω/ω0| = |1 + j| = $\sqrt{2}$. Then equation 3 becomes

\[20\log_{10}\frac{|K|}{|1 + j\omega/\omega_0|} = 20\log_{10}K - 20\log_{10}\sqrt{2} = 20\log_{10}K - 3\ \text{dB} \qquad (6)\]

Thus, the Bode magnitude plot of a first-order low-pass filter is approximated by two straight lines intersecting at ω0. Figure 1(a) clearly shows the approximation. The actual Bode magnitude plot is 3 dB lower than the approximate plot at ω = ω0, the cutoff frequency.

The phase angle of the frequency response function, $\angle(V_0/V_i) = -\tan^{-1}(\omega/\omega_0)$, can be represented, as a first approximation, by three straight lines:
1. For ω < 0.1ω0, ∠(V0/Vi) ≈ 0.
2. For 0.1ω0 ≤ ω ≤ 10ω0, ∠(V0/Vi) ≈ −(π/4) log(10ω/ω0).
3. For ω > 10ω0, ∠(V0/Vi) ≈ −π/2.

These straight-line approximations are illustrated in Figure 1(b). Table 1 lists the differences between the actual and approximate Bode magnitude and phase plots. Note that the maximum difference in magnitude is 3 dB at the cutoff frequency; thus, the cutoff is often called the 3-dB frequency or the half-power frequency.

Table 1 Correction factors for the asymptotic approximation of a first-order filter
ω/ω0    Magnitude response error (dB)    Phase response error (deg)
0.1     0      −5.7
0.5     −1     +4.9
1       −3     0
2       −1     −4.9
10      0      +5.7

RC High-Pass Filter Bode Plots

The case of an RC high-pass filter is analyzed in the same manner as for the RC low-pass filter. The frequency response function is

\[\frac{V_0}{V_i} = \frac{j\omega CR}{1 + j\omega CR} = \frac{j(\omega/\omega_0)}{1 + j(\omega/\omega_0)} = \frac{(\omega/\omega_0)\,\angle(\pi/2)}{\sqrt{1 + (\omega/\omega_0)^2}\,\angle\arctan(\omega/\omega_0)} = \frac{\omega/\omega_0}{\sqrt{1 + (\omega/\omega_0)^2}}\,\angle\left(\frac{\pi}{2} - \arctan\frac{\omega}{\omega_0}\right) \qquad (7)\]

Figure 2 depicts the Bode plots for equation 7, where the horizontal axis indicates the normalized frequency ω/ω0. Straight-line asymptotic approximations may again be determined easily at low and high frequencies. The results are very similar to those for the first-order low-pass filter. For ω < ω0, the Bode magnitude approximation is a line of slope +20 dB/decade that reaches 0 dB at ω = ω0 (that is, at ω/ω0 = 1). For ω > ω0, the Bode magnitude approximation is 0 dB with zero slope.

Figure 2 Bode plots for RC high-pass filter. (a) Magnitude response; (b) phase response

The straight-line approximations of the Bode phase plot are:
1. For ω < 0.1ω0, ∠(V0/Vi) ≈ π/2.
2. For 0.1ω0 ≤ ω ≤ 10ω0, ∠(V0/Vi) ≈ π/2 − (π/4) log(10ω/ω0).
3. For ω > 10ω0, ∠(V0/Vi) ≈ 0.
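To make the first-order results above concrete, the following short Python sketch (an illustration added here, not part of the original article; it assumes NumPy is available and uses an arbitrary cutoff ω0 = 1 rad/s, since only the ratio ω/ω0 matters) evaluates the exact low-pass response of equation 2 and the straight-line approximation at the frequencies listed in Table 1, so the magnitude and phase correction factors can be checked numerically.

```python
import numpy as np

w0 = 1.0  # cutoff frequency in rad/s; an arbitrary assumption, only w/w0 matters here

def exact_lowpass(w):
    """Exact first-order low-pass response of equation 2: H = 1 / (1 + jw/w0)."""
    H = 1.0 / (1.0 + 1j * w / w0)
    return 20 * np.log10(np.abs(H)), np.degrees(np.angle(H))

def asymptotic_lowpass(w):
    """Straight-line Bode approximation: 0 dB below w0, -20 dB/decade above;
    phase 0 below 0.1*w0, -90 deg above 10*w0, -45 deg/decade in between."""
    mag = np.where(w <= w0, 0.0, -20 * np.log10(w / w0))
    phase = np.piecewise(
        w,
        [w < 0.1 * w0, (w >= 0.1 * w0) & (w <= 10 * w0), w > 10 * w0],
        [0.0, lambda x: -45.0 * np.log10(10 * x / w0), -90.0],
    )
    return mag, phase

ratios = np.array([0.1, 0.5, 1.0, 2.0, 10.0])   # same points as Table 1
mag_ex, ph_ex = exact_lowpass(ratios * w0)
mag_as, ph_as = asymptotic_lowpass(ratios * w0)

print(" w/w0   mag error (dB)   phase error (deg)")
for r, dm, dp in zip(ratios, mag_ex - mag_as, ph_ex - ph_as):
    print(f"{r:5.1f} {dm:15.1f} {dp:18.1f}")
```

The printed differences reproduce the correction factors of Table 1 (for example, −3 dB and 0° at ω = ω0), which is a quick sanity check on both the table and the approximation rules.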
Bode Plots of Higher-Order Filters

Bode plots of higher-order systems may be obtained by combining the Bode plots of the factors of the higher-order frequency response function. Let, for example,

\[H(j\omega) = H_1(j\omega)\,H_2(j\omega)\,H_3(j\omega) \qquad (8)\]

which can be expressed, in logarithmic form, as

\[|H(j\omega)|_{dB} = |H_1(j\omega)|_{dB} + |H_2(j\omega)|_{dB} + |H_3(j\omega)|_{dB} \qquad (9)\]

\[\angle H(j\omega) = \angle H_1(j\omega) + \angle H_2(j\omega) + \angle H_3(j\omega) \qquad (10)\]

Consider as an example the frequency response function

\[H(j\omega) = \frac{j\omega + 5}{(j\omega + 10)(j\omega + 100)} \qquad (11)\]

The first step in computing the asymptotic approximation consists of factoring each term in the expression so that it appears in the form $a_i(j\omega/\omega_i + 1)$, where the frequency ωi corresponds to the appropriate 3-dB frequency, ω1, ω2, or ω3. For example, the function of equation 11 is rewritten as

\[H(j\omega) = \frac{5(j\omega/5 + 1)}{10(j\omega/10 + 1)\cdot 100(j\omega/100 + 1)} = \frac{0.005(j\omega/5 + 1)}{(j\omega/10 + 1)(j\omega/100 + 1)} = \frac{K(j\omega/\omega_1 + 1)}{(j\omega/\omega_2 + 1)(j\omega/\omega_3 + 1)} \qquad (12)\]

Equation 12 can now be expressed in logarithmic form:

\[|H(j\omega)|_{dB} = |0.005|_{dB} + \left|\frac{j\omega}{5} + 1\right|_{dB} - \left|\frac{j\omega}{10} + 1\right|_{dB} - \left|\frac{j\omega}{100} + 1\right|_{dB}\]
\[\angle H(j\omega) = \angle 0.005 + \angle\left(\frac{j\omega}{5} + 1\right) - \angle\left(\frac{j\omega}{10} + 1\right) - \angle\left(\frac{j\omega}{100} + 1\right) \qquad (13)\]

Each of the terms in the logarithmic magnitude expression can be plotted individually. The constant corresponds to the value −46 dB, plotted in Figure 3(a) as a line of zero slope. The numerator term, with a 3-dB frequency ω1 = 5, is expressed in the form of the first-order Bode plot of Figure 1(a), except for the fact that the slope of the line leaving the zero axis at ω1 = 5 is +20 dB/decade; each of the two denominator factors is similarly plotted as a line of slope −20 dB/decade, departing the zero axis at ω2 = 10 and ω3 = 100, respectively. You can see that the individual factors are very easy to plot by inspection once the frequency response function has been normalized in the form of equation 12. If we now consider the phase response portion of equation 13, we recognize that the first term, the phase angle of the constant, is always zero. The numerator first-order term, on the other hand, can be approximated by drawing a straight line starting at 0.1ω1 = 0.5, with slope +π/4 rad/decade (positive because this is a numerator factor), and ending at 10ω1 = 50, where the asymptote +π/2 is reached. The two denominator terms have similar behavior, except that the slope is −π/4 rad/decade and the straight lines extend between the frequencies 0.1ω2 and 10ω2, and 0.1ω3 and 10ω3, respectively. Figure 3 depicts the asymptotic approximations of the individual factors in equation 13, with the magnitude factors shown in Figure 3(a) and the phase factors in Figure 3(b).
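The same bookkeeping extends to the higher-order example of equation 11. The sketch below (again only an illustration under the same NumPy assumption, not code from the original article) sums the straight-line contributions of the constant K = 0.005, the zero at ω1 = 5, and the poles at ω2 = 10 and ω3 = 100, and compares the composite asymptotic approximation with the exact response of equation 11.

```python
import numpy as np

def line_segment_mag(w, wc, sign):
    """Asymptotic magnitude (dB) of a first-order factor (jw/wc + 1):
    0 dB below wc, +/-20 dB/decade above; sign=+1 for a numerator factor, -1 for a denominator factor."""
    return sign * np.where(w <= wc, 0.0, 20 * np.log10(w / wc))

def line_segment_phase(w, wc, sign):
    """Asymptotic phase (deg) of a first-order factor (jw/wc + 1)."""
    return sign * np.piecewise(
        w,
        [w < 0.1 * wc, (w >= 0.1 * wc) & (w <= 10 * wc), w > 10 * wc],
        [0.0, lambda x: 45.0 * np.log10(10 * x / wc), 90.0],
    )

w = np.logspace(-1, 4, 11)          # rad/s, spans all three corner frequencies

# Exact response of equation 11: H = (jw + 5) / ((jw + 10)(jw + 100))
H = (1j * w + 5) / ((1j * w + 10) * (1j * w + 100))
exact_db = 20 * np.log10(np.abs(H))
exact_ph = np.degrees(np.angle(H))

# Composite asymptotic approximation: constant K = 0.005 plus one zero (w1 = 5)
# and two poles (w2 = 10, w3 = 100), summed term by term as in equation 13
K_db = 20 * np.log10(0.005)
asym_db = K_db + line_segment_mag(w, 5.0, +1) \
               + line_segment_mag(w, 10.0, -1) + line_segment_mag(w, 100.0, -1)
asym_ph = line_segment_phase(w, 5.0, +1) \
        + line_segment_phase(w, 10.0, -1) + line_segment_phase(w, 100.0, -1)

for wi, ed, ad, ep, ap in zip(w, exact_db, asym_db, exact_ph, asym_ph):
    print(f"w={wi:9.2f}  exact {ed:7.2f} dB / {ep:6.1f} deg   "
          f"asymptotic {ad:7.2f} dB / {ap:6.1f} deg")
```

Well below 5 rad/s both columns sit near −46 dB with roughly 0° of phase, and well above 100 rad/s they fall together at −20 dB/decade toward −90°, matching the behavior described for Figures 3 and 4.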
When all the asymptotic approximations are combined, the complete frequency response approximation is obtained. Figure 4 depicts the results of the asymptotic Bode approximation compared with the actual frequency response functions.

Figure 3 Bode plot approximation for a second-order frequency response function. (a) Straight-line approximation of magnitude response; (b) straight-line approximation of phase angle response

Figure 4 Comparison of Bode plot approximation with the actual frequency response function. (a) Magnitude response of second-order frequency response function; (b) phase angle response of second-order frequency response function

You can see that once a frequency response function is factored into the appropriate form, it is relatively easy to sketch a good approximation of the Bode plot, even for higher-order frequency response functions.

Bode Plot Step by Step Guide

This section illustrates the Bode plot asymptotic approximation construction procedure. The method assumes that there are no complex conjugate factors in the response and that both the numerator and denominator can be factored into first-order terms with real roots.

1. Express the frequency response function in factored form, resulting in an expression similar to the following:
\[H(j\omega) = \frac{K(j\omega/\omega_1 + 1)\cdots(j\omega/\omega_m + 1)}{(j\omega/\omega_{m+1} + 1)\cdots(j\omega/\omega_n + 1)}\]
2. Select the appropriate frequency range for the semilogarithmic plot, extending at least a decade below the lowest 3-dB frequency and a decade above the highest 3-dB frequency.
3. Sketch the magnitude and phase response asymptotic approximations for each of the first-order factors, using the techniques illustrated in Figures 1 to 4.
4. Add, graphically, the individual terms to obtain a composite response.
5. If desired, apply the correction factors of Table 1.
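The five-step recipe is easy to mechanize. The sketch below (an illustration only; the function name and interface are my own choices and NumPy is assumed) builds the composite straight-line magnitude and phase approximation for any response that has already been factored into real first-order terms as in step 1.

```python
import numpy as np

def asymptotic_bode(K, zero_corners, pole_corners, w):
    """Composite straight-line Bode approximation (steps 2-4 of the guide).

    K            -- constant gain of the factored response
    zero_corners -- list of 3-dB frequencies of numerator factors (jw/wc + 1)
    pole_corners -- list of 3-dB frequencies of denominator factors
    w            -- array of angular frequencies to evaluate (rad/s)
    """
    mag_db = np.full_like(w, 20 * np.log10(abs(K)), dtype=float)
    phase_deg = np.zeros_like(w, dtype=float)
    for wc, sign in [(wc, +1) for wc in zero_corners] + [(wc, -1) for wc in pole_corners]:
        mag_db += sign * np.where(w <= wc, 0.0, 20 * np.log10(w / wc))
        phase_deg += sign * np.piecewise(
            w,
            [w < 0.1 * wc, (w >= 0.1 * wc) & (w <= 10 * wc), w > 10 * wc],
            [0.0, lambda x, wc=wc: 45.0 * np.log10(10 * x / wc), 90.0],
        )
    # Step 5 (optional, by hand): apply the Table 1 corrections, e.g. -3 dB at the
    # corner of a pole factor and +3 dB at the corner of a zero factor.
    return mag_db, phase_deg

# Example: the factored form of equation 12, K = 0.005, zero at 5, poles at 10 and 100
w = np.logspace(-1, 4, 6)
mag, phase = asymptotic_bode(0.005, [5.0], [10.0, 100.0], w)
for wi, m, p in zip(w, mag, phase):
    print(f"w = {wi:10.2f} rad/s   {m:8.2f} dB   {p:8.1f} deg")
```

Calling it with K = 0.005, a zero at 5 rad/s, and poles at 10 and 100 rad/s reproduces the asymptotic curves of the worked example, and the Table 1 corrections can then be applied by hand where a smoother curve is needed.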
Coordinate systems Definition and 25 Discussions In geometry, a coordinate system is a system that uses one or more numbers, or coordinates, to uniquely determine the position of the points or other geometric elements on a manifold such as Euclidean space. The order of the coordinates is significant, and they are sometimes identified by their position in an ordered tuple and sometimes by a letter, as in "the x-coordinate". The coordinates are taken to be real numbers in elementary mathematics, but may be complex numbers or elements of a more abstract system such as a commutative ring. The use of a coordinate system allows problems in geometry to be translated into problems about numbers and vice versa; this is the basis of analytic geometry. SH2372 General Relativity (1): Euclidean space and coordinate systems Orodruin Basis vectors Coordinate systems Euclidean space Category: Relativity I Coordinate Systems that are less common than Cartesian, Polar, Cylindrical & Spherical? I have come across Cartesian, Polar, Cylindrical & Spherical Coordinate Systems so far and was wondering if someone could tell me which are the uncommon systems used in physics which everyone says that they exist but no one explicitly mentions. Is there a "standard reference" or are they just... Falgun coordinate geometry coordinate systems coordinate transformation Forum: Classical Physics Frame indifference and stress tensor in Newtonian fluids During lecture today, we were given the constitutive equation for the Newtonian fluids, i.e. ##T= - \pi I + 2 \mu D## where ##D=\frac{L + L^T}{2}## is the symmetric part of the velocity gradient ##L##. Dimensionally speaking, this makes sense to me: indeed the units are the one of a pressure... bobinthebox cauchy stress continuum mechanics coordinate systems newtonian fluid stress A BMS coordinates near future null infinity Let us consider Ashtekar's definition of asymptotic flatness at null infinity: I want to see how to construct the so-called Bondi coordinates ##(u,r,x^A)## in a neighborhood of ##\mathcal{I}^+## out of this definition. In fact, a distinct approach to asymptotic flatness already starts with... leo. coordinate systems differential geometry Forum: Special and General Relativity Distinguishing between angular bisectors Homework Statement :[/B] The following expression stands for the two angular bisectors for two lines : \frac{a_{1}x+b_{1}y+c_{1}}{\sqrt{a_{1}^{2}+b_{1}^{2}}}=\pm \frac{a_{2}x+b_{2}y+c_{2}}{\sqrt{a_{2}^{2}+b_{2}^{2}}}\qquad Homework Equations The equations for the two lines are : ##a_1x +... analytic geometry coordinate systems geometry Forum: Precalculus Mathematics Homework Help I About spacetime coordinate systems Hi, There is a point that, in my opinion, is not quite emphasized in the context of general relativity. It is the notion of spacetime coordinate systems that from the very foundation of general relativity are assumed to be all on the same footing. Nevertheless I believe each of them has to be... coordinate systems general relativity schwartzchild metric spacetime curvature spacetime metric A Construction of Bondi Coordinates on general spacetimes I'm trying to understand the BMS formalism in General Relativity and I'm in doubt with the so-called Bondi Coordinates. In the paper Lectures on the Infrared Structure of Gravity and Gauge Theories Andrew Strominger points out in section 5.1 the following: In the previous sections, flat... 
coordinate systems differential geometry general relaivity General relativity- Coordinate/metric transformations Homework Statement Consider the metric ds2=(u2-v2)(du2 -dv2). I have to find a coordinate system (t,x), such that ds2=dt2-dx2. The same for the metric: ds2=dv2-v2du2. Homework Equations General coordinate transformation, ds2=gabdxadxb The Attempt at a Solution I started with a general... jgarrel coordinate systems coordinate transformations differential geometry general relativity Natural basis and dual basis of a circular paraboloid Hi everyone! I'm trying to obtain the natural and dual basis of a circular paraboloid parametrized by: $$x = \sqrt U cos(V)$$ $$y = \sqrt U sen(V)$$ $$z = U$$ with the inverse relationship: $$V = \arctan \frac{y}{x}$$ $$U = z$$ The natural basis is: $$e_U = \frac{\partial \overrightarrow{r}}... coordinate systems coordinate transformation differential geometry tensor calculus Forum: Calculus and Beyond Homework Help I Amplitudes of Fourier expansion of a vector as the generalized coordinates When discussing about generalized coordinates, Goldstein says the following: "All sorts of quantities may be impressed to serve as generalized coordinates. Thus, the amplitudes in a Fourier expansion of vector(rj) may be used as generalized coordinates, or we may find it convenient to employ... RickRazor classical mechanics coordinate systems goldstein I Metric Tensor as Simplest Math Object for Describing Space I've been reading Fleisch's "A Student's Guide to Vectors and Tensors" as a self-study, and watched this helpful video also by Fleisch: Suddenly co-vectors and one-forms make more sense than they did when I tried to learn the from Schutz's GR book many years ago. Especially in the video... NaiveBayesian coordinate systems general relativity metric tensor I How this definition of a reference frame is used? In the book General Relativity for Mathematicians by Sachs and Wu, an observer is defined as a timelike future pointing worldline and a reference frame is defined as a timelike, future pointing vector field Z. In that sense a reference frame is a collection of observers, since its integral lines... coordinate systems definition differential geometry general relativity reference frames A How these notions relate to the usual SR approach? In the context of General Relativity spacetime is a four-dimensional Lorentzian manifold M with metric tensor g, its Levi-Civita connection \nabla and a time orientation vector field T \in \Gamma(TM). In this context I've seem the following three definitions: A coordinate system is a chart... coordinate systems general relativity reference frames special relativity I Orientation of the Earth, Sun and Solar System in the Milky Way I've been tinkering with a few diagrams in an attempt to illustrate the motion of the solar system in its journey around the Milky Way. I also wanted portray how the celestial, ecliptic and galactic coordinate systems are related to each other in a single picture. Note: in the Celestial, or... fizixfan coordinate systems earth and space ecliptic galaxy milky way Forum: Astronomy and Astrophysics I GPS data Hi there! Does anyone know where data from the GPS is available? Any data at all - positions. clock readings anything like that. Many thanks! 
Matter_Matters coordinate systems gps satellite.project time Forum: Other Physics Topics A A question about coordinate distance & geometrical distance As I understand it, the notion of a distance between points on a manifold ##M## requires that the manifold be endowed with a metric ##g##. In the case of ordinary Euclidean space this is simply the trivial identity matrix, i.e. ##g_{\mu\nu}=\delta_{\mu\nu}##. In Euclidean space we also have that... coordinate systems differential geometry distance measurement manifolds metric Forum: Differential Geometry A Manifolds: local & global coordinate charts I'm fairly new to differential geometry (learning with a view to understanding general relativity at a deeper level) and hoping I can clear up some questions I have about coordinate charts on manifolds. Is the reason why one can't construct global coordinate charts on manifolds in general... coordinate systems differential geometry general relativity manifolds Is polar coordinate system non inertial? Studying the acceleration expressed in polar coordinates I came up with this doubt: is this frame to be considered inertial or non inertial? (\ddot r - r\dot{\varphi}^2)\hat{\mathbf r} + (2\dot r \dot\varphi+r\ddot{\varphi}) \hat{\boldsymbol{\varphi}} (1) I do not understand what is the... Soren4 coordinate systems inertial frame polar coordinates Forum: Optics Div and curl in other coordinate systems My question is mostly about notation. I know the general definitions for divergence and curl, which can be derived from the divergence and Stokes' theorems respectively, are: \mathrm{div } \vec{E} \bigg| _P = \lim_{\Delta V \to 0} \frac{1}{\Delta V} \iint_{S} \vec{E} \cdot \mathrm{d} \vec{S}... Jezza coordinate systems curl divergence gradient vector calculus Forum: Calculus Non-Euclidean geometry and the equivalence principle As I understand it, a Cartesian coordinate map (a coordinate map for which the line element takes the simple form ##ds^{2}=(dx^{1})^{2}+ (dx^{2})^{2}+\cdots +(dx^{n})^{2}##, and for which the coordinate basis ##\lbrace\frac{\partial}{\partial x^{\mu}}\rbrace## is orthonormal) can only be... "Don't panic!" coordinate systems differential geometry metric tensor Covariant and contravariant basis vectors /Euclidean space I want ask another basic question related to this paper - http://www.tandfonline.com/doi/pdf/10.1080/16742834.2011.11446922 If I have basis vectors for a curvilinear coordinate system(Euclidean space) that are completely orthogonal to each other(basis vectors will change from point to point)... meteo student contravariant coordinate systems coordinate transformation covariant vectors A question concerning Jacobians Apologies for perhaps a very trivial question, but I'm slightly doubting my understanding of Jacobians after explaining the concept of coordinate transformations to a colleague. Basically, as I understand it, the Jacobian (intuitively) describes how surface (or volume) elements change under a... change of basis coordinate systems coordinate transformations jacobian Forum: Linear and Abstract Algebra Local parameterizations and coordinate charts I have recently had a lengthy discussion on this forum about coordinate charts which has started to clear up some issues in my understanding of manifolds. I have since been reading a few sets of notes (in particular referring to John Lee's "Introduction to Smooth Manifolds") and several of them... 
coordinate systems differential geometry manifold paramaterization General relativity and curvilinear coordinates I have just been asked why we use curvilinear coordinate systems in general relativity. I replied that, from a heuristic point of view, space and time are relative, such that the way in which you measure them is dependent on the reference frame that you observe them in. This implies that... coordinate systems differential geometry general relativity Choice of Origin of Coordinate Systems I am having a personal discussion with somebody elsewhere (not on Physics Forums) and we are stuck at the moment because of a disagreement that I narrowed down to the question whether, in the context of SR, two observers in different reference frames can choose the origin of their coordinate... Fantasist coordinate systems lorentz transformation
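One of the threads indexed above quotes the polar-coordinate acceleration ##(\ddot r - r\dot\varphi^2)\hat{\mathbf r} + (2\dot r\dot\varphi + r\ddot\varphi)\hat{\boldsymbol\varphi}##. As a side illustration (added here, not taken from any of the threads; it assumes sympy is available), those components can be recovered by differentiating ##x = r\cos\varphi##, ##y = r\sin\varphi## twice and projecting onto the unit vectors:

```python
import sympy as sp

t = sp.symbols('t')
r = sp.Function('r')(t)
phi = sp.Function('phi')(t)

# Cartesian position of a particle described in polar coordinates
x = r * sp.cos(phi)
y = r * sp.sin(phi)

# Second time derivatives (Cartesian acceleration components)
ax = sp.diff(x, t, 2)
ay = sp.diff(y, t, 2)

# Unit vectors r_hat and phi_hat expressed in Cartesian components
r_hat = (sp.cos(phi), sp.sin(phi))
phi_hat = (-sp.sin(phi), sp.cos(phi))

# Project the acceleration onto the polar unit vectors
a_r = sp.simplify(ax * r_hat[0] + ay * r_hat[1])
a_phi = sp.simplify(ax * phi_hat[0] + ay * phi_hat[1])

print(a_r)    # equivalent to r'' - r*phi'^2
print(a_phi)  # equivalent to 2*r'*phi' + r*phi''
```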
8.2: Sources in General Relativity (Part 2)

Energy of Gravitational Fields Not Included in the Stress-energy Tensor
Energy Conditions

Summarizing the story of the Kreuzer and Bartlett-van Buren results, we find that observations verify to high precision one of the defining properties of general relativity, which is that all forms of energy are equivalent to mass. That is, Einstein's famous \(E = mc^2\) can be extended to gravitational effects, with the proviso that the source of gravitational fields is not really a scalar m but the stress-energy tensor T. But there is an exception to this even-handed treatment of all types of energy, which is that the energy of the gravitational field itself is not included in T, and is not even generally a well-defined concept locally. In Newtonian gravity, we can have conservation of energy if we attribute to the gravitational field a negative potential energy density \(− \frac{\textbf{g}^{2}}{8 \pi}\). But the equivalence principle tells us that g is not a tensor, for we can always make g vanish locally by going into the frame of a free-falling observer, and yet the tensor transformation laws will never change a nonzero tensor to a zero tensor under a change of coordinates. Since the gravitational field is not a tensor, there is no way to add a term for it into the definition of the stress-energy, which is a tensor. The grammar and vocabulary of the tensor notation are specifically designed to prevent writing down such a thing, so that the language of general relativity is not even capable of expressing the idea that gravitational fields would themselves contribute to T.

Exercise \(\PageIndex{2}\)

Self-check: (1) Convince yourself that the negative sign in the expression \(− \frac{\textbf{g}^{2}}{8 \pi}\) makes sense, by considering the case where two equal masses start out far apart and then fall together and combine to make a single body with twice the mass. (2) The Newtonian gravitational field is the gradient of the gravitational potential \(\phi\), which corresponds in the Newtonian limit to the time-time component of the metric. With this motivation, suppose someone proposes generalizing the Newtonian energy density \(− \frac{(\nabla \phi)^{2}}{8 \pi}\) to a similar expression such as \(−(\nabla_{a} g^{a}_{b})(\nabla_{c} g^{b}_{c})\), where \(\nabla\) is now the covariant derivative, and g is the metric, not the Newtonian field strength. What goes wrong?

As a concrete example, we observe that the Hulse-Taylor binary pulsar system (section 6.2) is gradually losing orbital energy, and that the rate of energy loss exactly matches general relativity's prediction of the rate of gravitational radiation. There is a net decrease in the forms of energy, such as rest mass and kinetic energy, that are accounted for in the stress-energy tensor T. We can account for the missing energy by attributing it to the outgoing gravitational waves, but that energy is not included in T, and we have to develop special techniques for evaluating that energy.
Those techniques only turn out to apply to certain special types of spacetimes, such as asymptotically flat ones, and they do not allow a uniquely defined energy density to be attributed to a particular small region of space (for if they did, that would violate the equivalence principle).

Example 3: Gravitational energy is locally unmeasurable

When a new form of energy is discovered, the way we establish that it is a form of energy is that it can be transformed to or from other forms of energy. For example, Becquerel discovered radioactivity by noticing that photographic plates left in a desk drawer along with radium salts became clouded: some new form of energy had been converted into previously known forms such as chemical energy. It is only in this limited sense that energy is ever locally observable, and this limitation prevents us from meaningfully defining certain measures of energy. For example, we can never measure the local electrical potential in the same sense that we can measure the local barometric pressure; a potential of 137 volts only has meaning relative to some other region of space taken to be at ground.

Let's use the acronym MELT to refer to measurement of energy by the local transformation of that energy from one form into another. The reason MELT works is that energy (or actually the momentum four-vector) is locally conserved, as expressed by the zero-divergence property of the stress-energy tensor. Without conservation, there is no notion of transformation. The Einstein field equations imply this zero-divergence property, and the field equations have been well verified by a variety of observations, including many observations (such as solar system tests and observation of the Hulse-Taylor system) that in Newtonian terms would be described as involving (non-local) transformations between kinetic energy and the energy of the gravitational field. This agreement with observation is achieved by taking T = 0 in vacuum, regardless of the gravitational field. Therefore any local transformation of gravitational field energy into another form of energy would be inconsistent with previous observation. This implies that MELT is impossible for gravitational field energy. In particular, suppose that observer A carries out a local MELT of gravitational field energy, and that A sees this as a process in which the gravitational field is reduced in intensity, causing the release of some other form of energy such as heat. Now consider the situation as seen by observer B, who is free-falling in the same local region. B says that there was never any gravitational field in the first place, and therefore sees heat as violating local conservation of energy. In B's frame, this is a nonzero divergence of the stress-energy tensor, which falsifies the Einstein field equations.

We conclude this introduction to the stress-energy tensor with some illustrative examples.

Example 4: A perfect fluid

For a perfect fluid, we have
\[T_{ab} = (\rho + P) v_{a} v_{b} - sPg_{ab}, \tag{8.1.11}\]
where s = 1 for our + − −− signature or −1 for the signature − + ++, and v represents the coordinate velocity of the fluid's rest frame. Suppose that the metric is diagonal, but its components are varying, \(g_{\alpha \beta} = diag(A^{2}, −B^{2}, \ldots)\). The properly normalized velocity vector of an observer at (coordinate-)rest is \(v^{\alpha} = (A^{-1}, 0, 0, 0)\). Lowering the index gives \(v_{\alpha} = (sA, 0, 0, 0)\).
The various forms of the stress-energy tensor then look like the following:
\[\begin{split} T_{00} &= A^{2} \rho \qquad \; \; \; T_{11} = B^{2} P \\ T^{0}_{0} &= s \rho \qquad \quad \; \; T^{1}_{1} = -sP \\ T^{00} &= A^{-2} \rho \qquad T^{11} = B^{-2} P \ldotp \end{split}\]

Example 5: A rope dangling in a Schwarzschild spacetime

Suppose we want to lower a bucket on a rope toward the event horizon of a black hole. We have already made some qualitative remarks about this idea in example 14 on p. 64. This seemingly whimsical example turns out to be a good demonstration of some techniques, and can also be used in thought experiments that illustrate the definition of mass in general relativity and that probe some ideas about quantum gravity.5 The Schwarzschild metric (section 6.2) is
\[ds^{2} = f^{2}\, dt^{2} - f^{-2}\, dr^{2} + \ldots, \tag{8.1.12}\]
where \(f = \left(1 - \frac{2m}{r}\right)^{1/2}\), and . . . represents angular terms. We will end up needing the following Christoffel symbols:
\[\begin{split} \Gamma^{t}_{tr} &= \frac{f'}{f} \\ \Gamma^{\theta}_{\theta r} &= \Gamma^{\phi}_{\phi r} = r^{-1} \end{split}\]
Since the spacetime has spherical symmetry, it ends up being more convenient to consider a rope whose shape, rather than being cylindrical, is a cone defined by some set of \((\theta, \phi)\). For convenience we take this set to cover unit solid angle. The final results obtained in this way can be readily converted into statements about a cylindrical rope. We let µ be the mass per unit length of the rope, and T the tension. Both of these may depend on r. The corresponding energy density and tensile stress are \(\rho = \frac{\mu}{A} = \frac{\mu}{r^{2}}\) and \(S = \frac{T}{A}\). To connect this to the stress-energy tensor, we start by comparing to the case of a perfect fluid from example 4. Because the rope is made of fibers that have strength only in the radial direction, we will have \(T^{\theta}_{\theta} = T^{\phi}_{\phi} = 0\). Furthermore, the stress is tensile rather than compressional, corresponding to a negative pressure. The Schwarzschild coordinates are orthogonal but not orthonormal, so the properly normalized velocity of a static observer has a factor of f in it: \(v^{\alpha} = (f^{-1}, 0, 0, 0)\), or, lowering an index, \(v_{\alpha} = (f, 0, 0, 0)\). The results of example 4 show that the mixed-index form of T will be the most convenient, since it can be expressed without messy factors of f. We have
\[T^{\kappa}_{\nu} = diag(\rho, S, 0, 0) = r^{-2} diag(\mu, T, 0, 0) \ldotp \tag{8.1.13}\]
By writing the stress-energy tensor in this form, which is independent of t, we have assumed static equilibrium outside the event horizon. Inside the horizon, the r coordinate is the timelike one, the spacetime itself is not static, and we do not expect to find static solutions, for the reasons given in example 14. Conservation of energy is automatically satisfied, since there is no time dependence. Conservation of radial momentum is expressed by
\[\nabla_{\kappa} T^{\kappa}_{r} = 0, \tag{8.1.14}\]
that is,
\[0 = \nabla_{r} T^{r}_{r} + \nabla_{t} T^{t}_{r} + \nabla_{\theta} T^{\theta}_{r} + \nabla_{\phi} T^{\phi}_{r} \ldotp \tag{8.1.15}\]
It would be tempting to throw away all but the first term, since T is diagonal, and therefore \(T^{t}_{r} = T^{\theta}_{r} = T^{\phi}_{r} = 0\). However, a covariant derivative can be nonzero even when the symbol being differentiated vanishes identically.
Writing out these four terms, we have \[\begin{split} 0 &= \partial_{r} T^{r}_{r} + \Gamma^{r}_{rr} T^{r}_{r} - \Gamma^{r}_{rr} T^{r}_{r} \\ &+ \Gamma^{t}_{tr} T^{r}_{r} - \Gamma^{t}_{tr} T^{t}_{t} \\ &+ \Gamma^{\theta}_{\theta r} T^{r}_{r} \\ &+ \Gamma^{\phi}_{\phi r} T^{r}_{r}, \end{split}\] where each line corresponds to one covariant derivative. Evaluating this, we have \[0 = T' + \frac{f'}{f} T - \frac{f'}{f} \mu, \tag{8.1.16}\] where primes denote differentiation with respect to r. Note that since no terms of the form \(\partial_{r} T^{t}_{t}\) occur, this expression is valid regardless of whether we take µ to be constant or varying. Thus we are free to take \(\rho \propto r^{−2}\), so that \(\mu\) is constant, and this means that our result is equally applicable to a uniform cylindrical rope. This result is checked using computer software in example 6. This is a differential equation that tells us how the tensile stress in the rope varies along its length. The coefficient \(\frac{f'}{f} = \frac{m}{r(r −2m)}\) blows up at the event horizon, which is as expected, since we do not expect to be able to lower the rope to or below the horizon. Let's check the Newtonian limit, where the gravitational field is g and the potential is \(\Phi\). In this limit, we have f ≈ 1 − \(\Phi\), \(\frac{f'}{f}\) ≈ g (with g > 0), and \(\mu\) >> T, resulting in \[0 = T' - g \mu \ldotp \tag{8.1.17}\] which is the expected Newtonian relation. Returning to the full general-relativistic result, it can be shown that for a loaded rope with no mass of its own, we have a finite result for \(lim_{r \rightarrow \infty}\) T, even when the bucket is brought arbitrarily close to the horizon. (The solution in this case is just T = \(\frac{T_{\infty}}{f}\), where T∞ is the tension at r = ∞.) However, this is misleading without the caveat that for \(\mu\) < T, the speed of transverse waves in the rope is greater than c, which is not possible for any known form of matter — it would violate the null energy condition, discussed in the following section. 5 Brown, "Tensile Strength and the Mining of Black Holes," arxiv.org/abs/ 1207.3342 Example 6: The rope, using computer algebra The result of example 5 can be checked with the following Maxima code: Physical theories are supposed to answer questions. For example: Does a small enough physical object always have a world-line that is approximately a geodesic? Do massive stars collapse to form black-hole singularities? Did our universe originate in a Big Bang singularity? If our universe doesn't currently have violations of causality such as the closed timelike curves exhibited by the Petrov metric (section 7.5), can we be assured that it will never develop causality violation in the future? We would like to "prove" whether the answers to questions like these are yes or no, but physical theories are not formal mathematical systems in which results can be "proved" absolutely. For example, the basic structure of general relativity isn't a set of axioms but a list of ingredients like the equivalence principle, which has evaded formal definition.6 Even the Einstein field equations, which appear to be completely well defined, are not mathematically formal predictions of the behavior of a physical system. The field equations are agnostic on the question of what kinds of matter fields contribute to the stress-energy tensor. In fact, any spacetime at all is a solution to the Einstein field equations, provided we're willing to admit the corresponding stress-energy tensor. 
We can never answer questions like the ones above without assuming something about the stress-energy tensor. In example 14, we saw that radiation has P = \(\frac{\rho}{3}\) and dust has P = 0. Both have \(\rho\) ≥ 0. If the universe is made out of nothing but dust and radiation, then we can obtain the following constraints on the energy-momentum tensor:

trace energy condition: \(\rho - 3P \geq 0\)
strong energy condition: \(\rho + 3P \geq 0\) and \(\rho + P \geq 0\)
dominant energy condition: \(\rho \geq 0\) and \(|P| \leq \rho\)
weak energy condition: \(\rho \geq 0\) and \(\rho + P \geq 0\)
null energy condition: \(\rho + P \geq 0\)

These are arranged roughly in order from strongest to weakest. They all have to do with the idea that negative mass-energy doesn't seem to exist in our universe, i.e., that gravity is always attractive rather than repulsive. With this motivation, it would seem that there should only be one way to state an energy condition: \(\rho\) > 0. But the symbols \(\rho\) and P refer to the form of the stress-energy tensor in a special frame of reference, interpreted as the one that is at rest relative to the average motion of the ambient matter. (Such a frame is not even guaranteed to exist unless the matter acts as a perfect fluid.) In this frame, the tensor is diagonal. Switching to some other frame of reference, the \(\rho\) and P parts of the tensor would mix, and it might be possible to end up with a negative energy density. The weak energy condition is the constraint we need in order to make sure that the energy density is never negative in any frame. The dominant energy condition is like the weak energy condition, but it also guarantees that no observer will see a flux of energy flowing at speeds greater than c. The strong energy condition essentially states that gravity is never repulsive; it is violated by the cosmological constant (see section 8.2).

Example 7: An electromagnetic wave

In example 1, we saw that dust boosted along the x axis gave a stress-energy tensor
\[T_{\mu \nu} = \gamma^{2} \rho \begin{pmatrix} 1 & v \\ v & v^{2} \end{pmatrix}, \tag{8.1.18}\]
where we now suppress the y and z parts, which vanish. For v → 1, this becomes
\[T_{\mu \nu} = \rho' \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}, \tag{8.1.19}\]
where \(\rho'\) is the energy density as measured in the new frame. As a source of gravitational fields, this ultrarelativistic dust is indistinguishable from any other form of matter with v = 1 along the x axis, so this is also the stress-energy tensor of an electromagnetic wave with local energy density \(\rho'\), propagating along the x axis. (For the full expression for the stress-energy tensor of an arbitrary electromagnetic field, see the Wikipedia article "Electromagnetic stress-energy tensor.") This is a stress-energy tensor that represents a flux of energy at a speed equal to c, so we expect it to lie at exactly the limit imposed by the dominant energy condition (DEC). Our statement of the DEC, however, was made for a diagonal stress-energy tensor, which is what is seen by an observer at rest relative to the matter. But we know that it's impossible to have an observer who, as the teenage Einstein imagined, rides alongside an electromagnetic wave on a motorcycle. One way to handle this is to generalize our definition of the energy condition. For the DEC, it turns out that this can be done by requiring that the matrix T, when multiplied by a vector on or inside the future light-cone, gives another vector on or inside the cone.
A less elegant but more concrete workaround is as follows. Returning to the original expression for the T of boosted dust at velocity v, we let v = 1 + \(\epsilon\), where |\(\epsilon\)| << 1. This gives a stress-energy tensor that (ignoring multiplicative constants) looks like: \[\begin{pmatrix} 1 & 1 + \epsilon \\ 1 + \epsilon & 1 + 2 \epsilon \end{pmatrix} \ldotp \tag{8.1.20}\] If \(\epsilon\) is negative, we have ultrarelativistic dust, and we can verify that it satisfies the DEC by un-boosting back to the rest frame. To do this explicitly, we can find the matrix's eigenvectors, which (ignoring terms of order \(\epsilon^{2}\)) are (1, 1 + \(\epsilon\)) and (1, 1 − \(\epsilon\)), with eigenvalues 2 + 2\(\epsilon\) and 0, respectively. For \(\epsilon\) < 0, the first of these is timelike, the second spacelike. We interpret them simply as the t and x basis vectors of the rest frame in which we originally described the dust. Using them as a basis, the stress-energy tensor takes on the form diag(2 + 2\(\epsilon\), 0). Except for a constant factor that we didn't bother to keep track of, this is the original form of the T in the dust's rest frame, and it clearly satisfies the DEC, since P = 0. For \(\epsilon\) > 0, v = 1 + \(\epsilon\) is a velocity greater than the speed of light, and there is no way to construct a boost corresponding to −v. We can nevertheless find a frame of reference in which the stressenergy tensor is diagonal, allowing us to check the DEC. The expressions found above for the eigenvectors and eigenvalues are still valid, but now the timelike and spacelike characters of the two basis vectors have been interchanged. The stress-energy tensor has the form diag(0, 2 + 2\(\epsilon\)), with \(\rho\) = 0 and P > 0, which violates the DEC. As in this example, any flux of mass-energy at speeds greater than c will violate the DEC. The DEC is obeyed for \(\epsilon\) < 0 and violated for \(\epsilon\) > 0, and since \(\epsilon\) = 0 gives a stress-energy tensor equal to that of an electromagnetic wave, we can tell that light is exactly on the border between forms of matter that fulfill the DEC and those that don't. Since the DEC is formulated as a non-strict inequality, it follows that light obeys the DEC. Example 8: No "speed of flux" The foregoing discussion may have encouraged the reader to believe that it is possible in general to read off a "speed of energy flux" from the value of T at a point. This is not true. The difficulty lies in the distinction between flow with and without accumulation, which is sometimes valid and sometimes not. In springtime in the Sierra Nevada, snowmelt adds water to alpine lakes more rapidly than it can flow out, and the water level rises. This is flow with accumulation. In the fall, the reverse happens, and we have flow with depletion (negative accumulation). Figure 8.1.5 (1) shows a second example in which the distinction seems valid. Charge is flowing through the lightbulb, but because there is no accumulation of charge in the DC circuit, we can't detect the flow by an electrostatic measurement; the wire does not attract the tiny bits of paper below it on the table. But we know that with different measurements, we could detect the flow of charge in Figure 8.1.5 (1). For example, the magnetic field from the wire would deflect a nearby magnetic compass. This shows that the distinction between flow with and without accumulation may be sometimes valid and sometimes invalid. 
Flow without accumulation may or may not be detectable; it depends on the physical context. In Figure 8.1.5 (2), an electric charge and a magnetic dipole are superimposed at a point. The Poynting vector P defined as E × B is used in electromagnetism as a measure of the flux of energy, and it tells the truth, for example, when the sun warms your skin on a hot day. In Figure 8.1.5 (2), however, all the fields are static. It seems as though there can be no flux of energy. But that doesn't mean that the Poynting vector is lying to us. It tells us that there is a pattern of flow, but it's flow without accumulation; the Poynting vector forms circular loops that close upon themselves, and electromagnetic energy is transported in and out of any volume at the same rate. We would perhaps prefer to have a mathematical rule that gave zero for the flux in this situation, but it's acceptable that our rule P = E × B gives a nonzero result, since it doesn't incorrectly predict an accumulation, which is what would be detectable.

Figure \(\PageIndex{5}\)

Now suppose we're presented with this stress-energy tensor, measured at a single point and expressed in some units:
\[T^{\mu \nu} = \begin{pmatrix} 4.037 \pm 0.002 & 4.038 \pm 0.002 \\ 4.036 \pm 0.002 & 4.036 \pm 0.002 \end{pmatrix} \ldotp \tag{8.1.21}\]
To within the experimental error bars, it has the right form to be many different things: (1) We could have a universe filled with perfectly uniform dust, moving along the x axis at some ultrarelativistic speed v so great that the \(\epsilon\) in v = 1 − \(\epsilon\), as in example 7, is not detectably different from zero. (2) This could be a point sampled from an electromagnetic wave traveling along the x axis. (3) It could be a point taken from Figure 8.1.5 (2). (In cases 2 and 3, the off-diagonal elements are simply the Poynting vector.) In cases 1 and 2, we would be inclined to interpret this stress-energy tensor by saying that its off-diagonal part measures the flux of mass-energy along the x axis, while in case 3 we would reject such an interpretation. The trouble here is not so much in our interpretation of T as in our Newtonian expectations about what is or isn't observable about fluxes that flow without accumulation. In Newtonian mechanics, a flow of mass is observable, regardless of whether there is accumulation, because it carries momentum with it; a flow of energy, however, is undetectable if there is no accumulation. The trouble here is that relativistically, we can't maintain this distinction between mass and energy. The Einstein field equations tell us that a flow of either will contribute equally to the stress-energy, and therefore to the surrounding gravitational field. The flow of energy in Figure 8.1.5 (2) contributes to the gravitational field, and its contribution is changed, for example, if the magnetic field is reversed. The figure is in fact not a bad qualitative representation of the spacetime around a rotating, charged black hole. At large distances, however, the gravitational effect of the off-diagonal terms in T becomes small, because they average to nearly zero over a sufficiently large spherical region. The distant gravitational field approaches that of a point mass with the same mass-energy.

Example 9: Momentum in static fields

Continuing the train of thought described in example 8, we can come up with situations that seem even more paradoxical. In Figure 8.1.5 (2), the total momentum of the fields vanishes by symmetry.
This symmetry can, however, be broken by displacing the electric charge by ∆R perpendicular to the magnetic dipole vector D. The total momentum no longer vanishes, and now lies in the direction of D × ∆R. But we have proved in example 2 that a system's center of mass-energy is at rest if and only if its total momentum is zero. Since this system's center of mass-energy is certainly at rest, where is the other momentum that cancels that of the electric and magnetic fields? Suppose, for example, that the magnetic dipole consists of a loop of copper wire with a current running around it. If we open a switch and extinguish the dipole, it appears that the system must recoil! This seems impossible, since the fields are static, and an electric charge does not interact with a magnetic dipole. Babson et al.7 have analyzed a number of examples of this type. In the present one, the mysterious "other momentum" can be attributed to a relativistic imbalance between the momenta of the electrons in the different parts of the wire. A subtle point about these examples is that even in the case of an idealized dipole of vanishingly small size, it makes a difference what structure we assume for the dipole. In particular, the field's momentum is nonzero for a dipole made from a current loop of infinitesimal size, but zero for a dipole made out of two magnetic monopoles.8

6 "Theory of gravitation theories: a no-progress report," Sotiriou, Faraoni, and Liberati, http://arxiv.org/abs/0707.2748
7 Am. J. Phys. 77 (2009) 826
8 Milton and Meille, arxiv.org/abs/1208.4826

Benjamin Crowell (Fullerton College). General Relativity is copyrighted with a CC-BY-SA license.
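Example 6 above refers to a Maxima listing that is not reproduced in this excerpt. As a stand-in, the following sympy sketch (an editorial illustration under the same assumptions, not Crowell's code) rebuilds \(\nabla_{\kappa} T^{\kappa}_{r}\) from the Christoffel symbols quoted in example 5 and confirms that it reduces to \(0 = T' + \frac{f'}{f}(T - \mu)\):

```python
import sympy as sp

r, m, mu = sp.symbols('r m mu', positive=True)
T = sp.Function('T')(r)               # tension in the rope
f = sp.sqrt(1 - 2*m/r)                # Schwarzschild factor

# Mixed-index stress-energy of the radial rope (example 5):
# T^t_t = mu/r**2,  T^r_r = T(r)/r**2,  angular components zero.
Ttt = mu / r**2
Trr = T / r**2

# Christoffel symbols quoted in example 5.
Gamma_t_tr = sp.diff(f, r) / f        # Gamma^t_{tr} = f'/f
Gamma_ang = 1 / r                     # Gamma^theta_{theta r} = Gamma^phi_{phi r} = 1/r

# Covariant divergence nabla_kappa T^kappa_r for a static, diagonal T:
# partial_r T^r_r + Gamma^t_{tr} (T^r_r - T^t_t) + 2 * Gamma_ang * T^r_r
div_r = sp.diff(Trr, r) + Gamma_t_tr * (Trr - Ttt) + 2 * Gamma_ang * Trr

# Multiplying by r**2 should leave exactly T' + (f'/f)(T - mu).
expected = sp.diff(T, r) + (sp.diff(f, r) / f) * (T - mu)
print(sp.simplify(r**2 * div_r - expected))   # prints 0
```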
Attractors of Hopfield-type lattice models with increasing neuronal input On an optimal control problem of time-fractional advection-diffusion equation February 2020, 25(2): 781-798. doi: 10.3934/dcdsb.2019267 Almost automorphic functions on semigroups induced by complete-closed time scales and application to dynamic equations Chao Wang 1, and Ravi P Agarwal 2,3,, Department of Mathematics, Yunnan University, Kunming, Yunnan 650091, China Department of Mathematics, Texas A & M University-Kingsville, 700 University Blvd., TX 78363-8202, Kingsville, TX, USA Distinguished University Professor of Mathematics, Florida Institute of Technology, Melbourne, FL 32901, USA * Corresponding author: Ravi P Agarwal Received December 2018 Revised April 2019 Published February 2020 Early access November 2019 Fund Project: This work is supported by Youth Fund of NSFC (No. 11601470), Tian Yuan Fund of NSFC (No. 11526181), Dong Lu youth excellent teachers development program of Yunnan University (No. wx069051), IRTSTYN and Joint Key Project of Yunnan Provincial Science and Technology Department of Yunnan University (No. 2018FY001(-014)). In this paper, we introduce the concepts of Bochner and Bohr almost automorphic functions on the semigroup induced by complete-closed time scales and their equivalence is proved. Particularly, when $ \Pi = \mathbb{R}^{+} $ (or $ \Pi = \mathbb{R}^{-} $), we can obtain the Bochner and Bohr almost automorphic functions on continuous semigroup, which is the new almost automorphic case on time scales compared with the literature [20] (W.A. Veech, Almost automorphic functions on groups, Am. J. Math., Vol. 87, No. 3 (1965), pp 719-751) since there may not exist inverse element in a semigroup. Moreover, when $ \Pi = h\mathbb{Z}^{+},\,h>0 $ (or $ \Pi = h\mathbb{Z}^{-},\,h>0 $), the corresponding automorphic functions on discrete semigroup can be obtained. Finally, we establish a theorem to guarantee the existence of Bochner (or Bohr) almost automorphic mild solutions of dynamic equations on semigroups induced by time scales. Keywords: Almost automorphic functions, semigroup, almost automorphic mild solutions, dynamic equations, time scales. Mathematics Subject Classification: Primary: 34N05, 43A60; Secondary: 26E70. Citation: Chao Wang, Ravi P Agarwal. Almost automorphic functions on semigroups induced by complete-closed time scales and application to dynamic equations. Discrete & Continuous Dynamical Systems - B, 2020, 25 (2) : 781-798. doi: 10.3934/dcdsb.2019267 R. P. Agarwal and D. O'Regan, Some comments and notes on almost periodic functions and changing-periodic time scales, Electron. J. Math. Anal. Appl., 6 (2018), 125-136. Google Scholar M. Bohner and A. Peterson, Dynamic Equations on Time Scales: An Introduction with Applications, Birkhäuser Boston, Inc., Boston, MA, 2001. doi: 10.1007/978-1-4612-0201-1. Google Scholar S. Bochner, Curvature and Betti numbers in real and complex vector bundles, Univ. e Politec. Torino Rend. Sem. Mat., 15 (2019), 225-253. Google Scholar S. Bochner, Uniform convergence of monotone sequences of functions, Proc. Nat. Acad. Sci. U.S.A., 47 (1961), 582-585. doi: 10.1073/pnas.47.4.582. Google Scholar S. Bochner, A new approach to almost periodicity, Proc. Nat. Acad. Sci. U.S.A., 48 (1962), 2039-2043. doi: 10.1073/pnas.48.12.2039. Google Scholar M. Bohner and J. G. Mesquita, Almost periodic functions in quantum calculus, Electron. J. Differential Equations, 2018, 1–11. Google Scholar Y. K. Chang and T. W. 
Feng, Properties on measure pseudo almost automorphic functions and applications to fractional differential equations in Banach spaces, Electron. J. Differential Equations, 2018, 1–14. Google Scholar Y. K. Chang and S. Zheng, Weighted pseudo almost automorphic solutions to functional differential equations with infinite delay, Electron. J. Differential Equations, 2016, 1–19. Google Scholar T. Diagana, Almost Automorphic Type and Almost Periodic Type Functions in Abstract Spaces, Springer, Cham, 2013. doi: 10.1007/978-3-319-00849-3. Google Scholar T. Diagana and G. M. N'Guérékata, Stepanov-like almost automorphic functions and applications to some semilinear equations, Appl. Anal., 86 (2007), 723-733. doi: 10.1080/00036810701355018. Google Scholar H. S. Ding, T. J. Xiao and J. Liang, Asymptotically almost automorphic solutions for some integrodifferential equations with nonlocal initial conditions, J. Math. Anal. Appl., 338 (2008), 141-151. doi: 10.1016/j.jmaa.2007.05.014. Google Scholar H. S. Ding and S. M. Wan, Asymptotically almost automorphic solutions of differential equations with piecewise constant argument, Open Math., 15 (2017), 595-610. doi: 10.1515/math-2017-0051. Google Scholar M. Kéré and G. M. N'Guérékata, Almost automorphic dynamic systems on time scales, PanAmer. Math. J., 28 (2018), 19-37. Google Scholar A. Milcé and J. C. Mado, Almost automorphic solutions of some semilinear dynamic equations on time scales, Int. J. Evol. Equ., 9 (2014), 217-229. Google Scholar G. Mophou, G. M. N'Guérékata and A. Milce, Almost automorphic functions of order n and applications to dynamic equations on time scales, Discrete Dyn. Nat. Soc., (2014), 1–13. doi: 10.1155/2014/410210. Google Scholar J. von Neumann, Almost periodic functions in a group, I, Trans. Amer. Math. Soc., 36 (1934), 445-492. doi: 10.1090/S0002-9947-1934-1501752-3. Google Scholar G. M. N'Guérékata, Topics in Almost Automorphy, Springer-Verlag, New York, 2005. Google Scholar G. M. N'Guérékata, Almost Automorphic and Almost Periodic Functions in Abstract Spaces, Kluwer Academic/Plenum Publishers, New York, 2001. doi: 10.1007/978-1-4757-4482-8. Google Scholar G. M. N'Guérékata and A. Pankov, Stepanov-like almost automorphic functions and monotone evolution equations, Nonlinear Anal., 68 (2008), 2658-2667. doi: 10.1016/j.na.2007.02.012. Google Scholar W. A. Veech, Almost automorphic functions on groups, Amer. J. Math., 87 (1965), 719-751. doi: 10.2307/2373071. Google Scholar C. Wang and R. P. Agarwal, Weighted piecewise pseudo almost automorphic functions with applications to abstract impulsive $\nabla$-dynamic equations on time scales, Adv. Difference Equ., 153 (2014), 1-29. doi: 10.1186/1687-1847-2014-153. Google Scholar C. Wang, R. P. Agarwal and D. O'Regan, n0-order $\Delta$-almost periodic functions and dynamic equations, Appl. Anal., 97 (2018), 2626-2654. doi: 10.1080/00036811.2017.1382689. Google Scholar C. Wang, R. P. Agarwal and D. O'Regan, Periodicity, almost periodicity for time scales and related functions, Nonauton. Dyn. Syst., 3 (2016), 24-41. doi: 10.1515/msds-2016-0003. Google Scholar C. Wang, R. P. Agarwal, D. O'Regan, C. Wang and R. P. Agarwal, Relatively dense sets, corrected uniformly almost periodic functions on time scales, and generalizations, Adv. Difference Equ., 312 (2015), 1-9. doi: 10.1186/s13662-015-0650-0. Google Scholar C. Wang, R. P Agarwal and D. O'Regan, A matched space for time scales and applications to the study on functions, Adv. Difference Equ., 305 (2017), 1-28. 
doi: 10.1186/s13662-017-1366-0. Google Scholar C. Wang, R. P Agarwal and D. O'Regan, Weighted piecewise pseudo double-almost periodic solution for impulsive evolution equations, J. Nonlinear Sci. Appl., 10 (2017), 3863-3886. doi: 10.22436/jnsa.010.07.41. Google Scholar C. Wang, R. P Agarwal and D. O'Regan, Π-semigroup for invariant under translations time scales and abstract weighted pseudo almost periodic functions with applications, Dynam. Systems Appl., 25 (2016), 1-28. Google Scholar C. Wang and R. P. Agarwal, Almost periodic solution for a new type of neutral impulsive stochastic Lasota-Wazewska timescale model, Appl. Math. Lett., 70 (2017), 58-65. doi: 10.1016/j.aml.2017.03.009. Google Scholar T. Xiao, J. Liang and J. Zhang, Pseudo almost automorphic solutions to semilinear differential equations in Banach spaces, Semigroup Forum, 76 (2008), 518-524. doi: 10.1007/s00233-007-9011-y. Google Scholar M. Zaki, Almost automorphic solutions of certain abstract differential equations, Ann. Mat. Pura Appl., 101 (1974), 91-114. doi: 10.1007/BF02417100. Google Scholar Z. M. Zheng and H. S. Ding, On completeness of the space of weighted pseudo almost automorphic functions, J. Funct. Anal., 268 (2015), 3211-3218. doi: 10.1016/j.jfa.2015.02.012. Google Scholar Rui Zhang, Yong-Kui Chang, G. M. N'Guérékata. Weighted pseudo almost automorphic mild solutions to semilinear integral equations with $S^{p}$-weighted pseudo almost automorphic coefficients. Discrete & Continuous Dynamical Systems, 2013, 33 (11&12) : 5525-5537. doi: 10.3934/dcds.2013.33.5525 Tomás Caraballo, David Cheban. Almost periodic and almost automorphic solutions of linear differential equations. Discrete & Continuous Dynamical Systems, 2013, 33 (5) : 1857-1882. doi: 10.3934/dcds.2013.33.1857 Gaston Mandata N ' Guerekata. Remarks on almost automorphic differential equations. Conference Publications, 2001, 2001 (Special) : 276-279. doi: 10.3934/proc.2001.2001.276 Hailong Zhu, Jifeng Chu, Weinian Zhang. Mean-square almost automorphic solutions for stochastic differential equations with hyperbolicity. Discrete & Continuous Dynamical Systems, 2018, 38 (4) : 1935-1953. doi: 10.3934/dcds.2018078 Mengyu Cheng, Zhenxin Liu. Periodic, almost periodic and almost automorphic solutions for SPDEs with monotone coefficients. Discrete & Continuous Dynamical Systems - B, 2021, 26 (12) : 6425-6462. doi: 10.3934/dcdsb.2021026 Aníbal Coronel, Christopher Maulén, Manuel Pinto, Daniel Sepúlveda. Almost automorphic delayed differential equations and Lasota-Wazewska model. Discrete & Continuous Dynamical Systems, 2017, 37 (4) : 1959-1977. doi: 10.3934/dcds.2017083 Bixiang Wang. Stochastic bifurcation of pathwise random almost periodic and almost automorphic solutions for random dynamical systems. Discrete & Continuous Dynamical Systems, 2015, 35 (8) : 3745-3769. doi: 10.3934/dcds.2015.35.3745 Yongkun Li, Pan Wang. Almost periodic solution for neutral functional dynamic equations with Stepanov-almost periodic terms on time scales. Discrete & Continuous Dynamical Systems - S, 2017, 10 (3) : 463-473. doi: 10.3934/dcdss.2017022 Denis Pennequin. Existence of almost periodic solutions of discrete time equations. Discrete & Continuous Dynamical Systems, 2001, 7 (1) : 51-60. doi: 10.3934/dcds.2001.7.51 Gaston N'Guerekata. On weak-almost periodic mild solutions of some linear abstract differential equations. Conference Publications, 2003, 2003 (Special) : 672-677. doi: 10.3934/proc.2003.2003.672 Ruichao Guo, Yong Li, Jiamin Xing, Xue Yang. 
Existence of periodic solutions of dynamic equations on time scales by averaging. Discrete & Continuous Dynamical Systems - S, 2017, 10 (5) : 959-971. doi: 10.3934/dcdss.2017050 Sung Kyu Choi, Namjip Koo. Stability of linear dynamic equations on time scales. Conference Publications, 2009, 2009 (Special) : 161-170. doi: 10.3934/proc.2009.2009.161 Yuval Z. Flicker. Automorphic forms on PGSp(2). Electronic Research Announcements, 2004, 10: 39-50. Reinhard Farwig, Yasushi Taniuchi. Uniqueness of backward asymptotically almost periodic-in-time solutions to Navier-Stokes equations in unbounded domains. Discrete & Continuous Dynamical Systems - S, 2013, 6 (5) : 1215-1224. doi: 10.3934/dcdss.2013.6.1215 Marko Kostić. Almost periodic type functions and densities. Evolution Equations & Control Theory, 2021 doi: 10.3934/eect.2021008 Raegan Higgins. Asymptotic behavior of second-order nonlinear dynamic equations on time scales. Discrete & Continuous Dynamical Systems - B, 2010, 13 (3) : 609-622. doi: 10.3934/dcdsb.2010.13.609 Yong Li, Zhenxin Liu, Wenhe Wang. Almost periodic solutions and stable solutions for stochastic differential equations. Discrete & Continuous Dynamical Systems - B, 2019, 24 (11) : 5927-5944. doi: 10.3934/dcdsb.2019113 Małgorzata Wyrwas, Dorota Mozyrska, Ewa Girejko. Subdifferentials of convex functions on time scales. Discrete & Continuous Dynamical Systems, 2011, 29 (2) : 671-691. doi: 10.3934/dcds.2011.29.671 Demou Luo, Qiru Wang. Dynamic analysis on an almost periodic predator-prey system with impulsive effects and time delays. Discrete & Continuous Dynamical Systems - B, 2021, 26 (6) : 3427-3453. doi: 10.3934/dcdsb.2020238 Chao Wang Ravi P Agarwal
Is sampling a Fourier-transformed signal and Fourier-transforming a sampled signal the same? I'm having a hard time understanding an assignment that states: Draw the complex spectrum of the sampled signal $f(t)$ (periodic and continuous). Do this by first calculating the Fourier transform and sampling it afterwards by multiplying with the impulse train. The way I understand it, I need to calculate $$\begin{equation*} \mathcal{F}\left[f(t) \cdot \sum_{k=-\infty}^{\infty}\delta(t-kT)\right] \end{equation*}$$ however the second sentence suggests this is the same as $$\mathcal{F}[f(t)] \cdot \sum_{k=-\infty}^{\infty}\delta(t-kT) $$ However, Wikipedia says this is not the case and instead suggests $$\begin{equation*}\mathcal{F}\left[f(t) \cdot \sum_{k=-\infty}^{\infty}\delta(t-kT)\right] = \mathcal{F}[f(t)] *\left[\frac{1}{T} \cdot \sum_{k=-\infty}^{\infty}\delta\left(t-\frac{k}{T}\right)\right] \end{equation*}$$ Are maybe both formulas correct? And how do I draw a complex spectrum? Is it 3D? fourier-transform sampling convolution Tendero

If I number your equations (1), (2) and (3) starting from the first, then your Eq (1) is correct as it represents a conceptual model of the ideal sampling process: multiplying the continuous signal with an impulse train. Consequently, Eq (3) is also correct since multiplication in the time domain is convolution in the frequency domain (recall that the Fourier transform of an impulse train in the time domain is also an impulse train in the frequency domain, with a different scaling factor). Convolving the two is what causes the spectrum to repeat at multiples of the sample rate. Your Eq (2), however, is not entirely correct. Fourier transforming the signal first and sampling it later implies sampling in the frequency domain. Interestingly, this should create 'aliases' in the time domain, i.e., repetition of the same waveform at multiples of the 'sample rate'. Why it is not completely incorrect here is that your signal is given to be continuous and periodic. A periodic signal has a sampled spectrum due to the reason described above; technically, it is said to have a Fourier series. So when you compute $F(f)$, either you should have in the time domain those complex sinusoids with Fourier series coefficients, or you should have in the frequency domain impulses with those coefficients. This might have caused confusion to whoever made the assignment. Update: A complex spectrum can be drawn in 3D with frequency, I and Q dimensions, but the most common practice (and the one that reveals the most information) is to draw two separate graphs: one for the magnitude response against frequency and the other for the phase response against frequency. Qasim Chaudhari
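A quick numerical check can make the answer's point concrete. The sketch below (an illustration added here, not part of the original question or answer; it assumes numpy is available and uses a Gaussian pulse because its continuous Fourier transform is known in closed form) verifies that the spectrum of the impulse-sampled signal equals the 1/T-scaled sum of shifted copies of F and repeats with period 1/T:

```python
import numpy as np

# Continuous-time signal and its known Fourier transform (ordinary-frequency convention):
# f(t) = exp(-pi t^2)  <->  F(nu) = exp(-pi nu^2)
f = lambda t: np.exp(-np.pi * t**2)
F = lambda nu: np.exp(-np.pi * nu**2)

T = 0.25                      # sampling period, so 1/T = 4
n = np.arange(-200, 201)      # enough samples that the Gaussian tails are negligible
samples = f(n * T)

def sampled_spectrum(nu):
    """Fourier transform of f(t) times the impulse train: sum_n f(nT) exp(-j 2 pi nu n T)."""
    return np.sum(samples * np.exp(-2j * np.pi * nu * n * T))

nu0 = 0.3
lhs = sampled_spectrum(nu0)

# Right-hand side of the convolution result: (1/T) * sum_k F(nu - k/T)
k = np.arange(-10, 11)
rhs = np.sum(F(nu0 - k / T)) / T

print(abs(lhs - rhs))                              # ~ 0: the two expressions agree
print(abs(sampled_spectrum(nu0 + 1 / T) - lhs))    # ~ 0: the spectrum repeats every 1/T
print(abs(F(nu0) - lhs))                           # not ~ 0: the sampled spectrum is not F itself
```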
Home Forums > Science > Physics & Math > Four Dot Products and of Momenta Discussion in 'Physics & Math' started by Anamitra Palit, Jan 11, 2021. Anamitra Palit Registered Member We deduce in the paper the following results v1.v2>=c^2 [v1 and v2 are four velocities] p1.p2>=m1m2c^2 [p1 and p2 are four velocities] https://drive.google.com/file/d/1yTw0x5uFs1zaT9bd1n6E7jgMvpA56k6W/view?usp=sharing By applying the reversed Cauchy Schwarz Inequality we may arrive directly at the same results Let's consider the reversed Cauchy Schwarz inequality. c^2 t1 t2-x1 x2 -y1 y2-z1 z2>=Sqrt[c^2t1^2-x1^2-y1^2-z1^2]Sqrt[c^2t2^2-x2^2-y2^2-z2^2] The equality sign holds when (t1,x1,y1,z1) and (t2,x2,y2,z2) are identical vectors Replacing x^i b y dx^i we obtain c^2 dt1 dt2-dx1dx2 -dy1 dy2-dz1 dz2>=Sqrt[c^2dt1^2-dx1^2-dy1^2-dz1^2]Sqrt[c^2dt2^2-dx2^2-dy2^2-dz2^2] A paper on the Reversed Cauchy Schwarz Inequality: https://drive.google.com/file/d/1z69d0OO4WRK6CthRln0s5LBJWbDS290f/view?usp=sharing Wikipedia Link on the Cauchy Schwarz Inequality: https://en.wikipedia.org/wiki/Minkowski_space#Norm_and_reversed_Cauchy_inequality Dividing both sides by dtau^2 we obtain Four dot product v1.v2>=c^2 Multiplying both sides by m1m2[m1 and m2 being rest masses] we obtain, m1v1.mv2>=m1 m2c^2 or,p1.p2>=m1m2 c^2 Anamitra Palit, Jan 11, 2021 Google AdSense Guest Advertisement Log in or Sign up to hide all adverts. James R Just this guy, you know? Staff Member And so? What's the point? James R, Jan 11, 2021 The formulas v1.v2>=c^2, p1.p2 >m1m2c^2 are important just like v,v=c^2 But alas.. c^2dtau^2=c^2dt^2-dx^2-dy^2-dz^2 c^2=c^2[dt/dtau]^2-[dx/dtau]^2-[dy/dtau]^2-[dz/dtau]^2 c^2=c^2v_t^2-v_x^2-v_y^2-v_z^2 (1) v_i are proper speeds and as such they can exceed the speed of light without hurting or violating Special Relativity For two proper velocities v1 and v2at the same point of the manifold.Since tensors are additive we have c^2=c^2(v1_t+v2_t)^2-(v1_x+v2_x)^2-(v1_y+v2_y)^2-(v1_z+v2_z)^2(2) or,c^2=c^2v1_t^2-v1_x^2-v1_y^2-v1_z^2+c^2v2_t^2-v2_x^2-v2_y^2-v2_z^2+2v1.v2 or, c^2=c^2+c^2+2v1.v2 v1.v2=-c^2 (3) By calculations we have arrived at an untenable result. An analogous result may be obtained in the General Relativity context Anamitra Palit, Jan 12, 2021 at 3:36 AM (in continuation) Indeed v1,v2 and v=v1+v2 all satisfy(1) and hence heir existence is certified by Special relativity or even by General Relativity for that matter. c^2dtau^2=c^2 g_tt dt^2-g_xx dx^2-g_yy dy^2-g_zz dz^2 (4) We consider transformations g_tt dt^2=dT^2, g_xxdx^2=dX^2, g_yydy^2=dY^2,g_zzdz^2=dZ^2 (5) Local or even transformations over infinitesimally small regions would suffice. Equations (4) and (5) combined gives us the flat space time metric[mathematical form of it] c^2dtau^2=c^2 dT^2- dX^2- dY^2- dZ^2 (6) All conclusions we made earlier follow. Incidentally, there is one point to take note of:the Lorentz transformations follow from (6) in a unique manner [Reference; Steve Wienberg,Gravitation and Cosmology,Chapter 2:Special Relativity] In relation to the last post We may always choose the eight unknowns unknowns:v1_i and v2_j with each i and j=1,2,3,4 , in such a manner that the next three equations hold c^2=c^2v1_t^2-v1_x^2-v1_y^2-v1_z^2 (1) c^2=c^2v2_t^2-v2_x^2-v2_y^2-v2_z^2+2v1.v2 (2) and c^2=c^2(v_1+v2_t)^2_-(v1_x+v2_x)^2-(v1_y+v2_y)^2-(v1_z+v2_z)^2 (3) Equations (1),(2) and (3) are all certified by the relation c^2=c^2v_t^2-v_x^2-v-y^2-v_z^2 which is equivalent to the Lorentz transformations as stated earlier. 
The three equations finally lead will lead to 2v1 .v2<=-c^2 Last edited: Jan 12, 2021 at 8:39 AM exchemist Valued Senior Member Anamitra Palit said: ↑ What is it you want to discuss? exchemist, Jan 12, 2021 at 9:03 AM Michael 345 New year. PRESENT is 71 years old Valued Senior Member exchemist said: ↑ The Inner Mind? I sense a link Please Register or Log in to view the hidden image! Michael 345, Jan 12, 2021 at 10:15 AM One has to follow the full thing The following two formulas have bee deduced 1. v1.v2>=c^2 2. p1.p2>=m1 m2c^2 Finally we discover a contradiction 2v1.v2<=-c^2 https://drive.google.com/file/d/148q5_2x8DTPLDpa48uE4QSzZ-jJtotNV/view?usp=sharing I would definitely try out for publication in some journal. Latest version of the article https://drive.google.com/file/d/1b2gBTZTYV0CC25u5H9Cd7StRW0z1sZKE/view?usp=sharing What is the significance of this contradiction? An important revision has been implemented[pl see "The Extra bit"]. Link to the revised file has been provided. I will keep the audience informed as I proceed with he article... https://drive.google.com/file/d/1cFmjI3LM7-vqG0Qib9waYG2oHLppmITJ/view?usp=sharing The very nature of the argument to bring out the contradiction has changed. Last edited: Jan 13, 2021 at 1:16 PM Anamitra Palit, Jan 13, 2021 at 1:03 PM Curried Reiku, apparently. exchemist, Jan 13, 2021 at 3:45 PM Important revisions have been made in "the Extra Bit" https://drive.google.com/file/d/1V_Ms1FfeiQKhMNk9lfqhxqr3GAcguBN5/view?usp=sharing Relevant material in Latex: \begin{equation}c^2d\tau^2=c^2dt^2-dx2-dy^2-dz^2 \end{equation} (1) \begin{equation}c^2=c^2\left(\frac{dt}{d\tau}\right)^2-\left(\frac{dx}{d\tau}\right)^2-\left(\frac{dy}{d\tau}\right)^2-\left(\frac{dz}{d\tau}\right)^2\end{equation} \begin{equation}c^2=c^2{v_t}^2-{v_x}^2-{v_y}^2-{v_z}^2\end{equation}(2) We consider two proper velocities on the same manifold \begin{equation}c^2=c^2{v_{1t}}^2-{v_{1x}}^2-{v_{1y}}^2-{v_{1z}}^2\end{equation}(3.1) Adding (3.1) and (3.2) we obtain \begin{equation}2c^2=c^2\left({v_{1t}}^2+{v_{2t}}^2\right)-\left({v_{1x}}^2+{v_{2x}}^2\right)-\left({v_{1y}}^2+{v_{2y}}^2\right)-\left({v_{1z}}^2+{v_{2z}}^2\right)\end{equation} \begin{equation}2c^2=c^2\left(v_{1t}+v_{2t}\right)^2-\left(v_{1x}+v_{2x}\right)^2-\left(v_{1y}+v_{2y}\right)^2-\left(v_{1z}+v_{2z}\right)^2-2v_1\dot v_2\end{equation} \begin{equation}2c^2+2v_1\dot v_2=c^2\left(v_{1t}+v_{2t}\right)^2-\left(v_{1x}+v_{2x}\right)^2-\left(v_{1y}+v_{2y}\right)^2-left(v_{1z}+v_{2z}\right)^2\end{equation} Since v1.v2>=c^2 we have \begin{equation}c^2\left({v_{1t}}+{v_{2t}}\right)^2-\left({v_{1x}}+{v_{2 x}}\right)^2-\left({v_{1y}}+{v_{1y}}\right)^2-\left({v_{1z}}+{v_{1z}}\right)^2\ge 4c^2\end{equation}(4) \begin{equation}\left(v_1+v2)\dot (v_1+v_2)\right) \ge 4c^2\end{equation}(5) If $v_1+v_2$ is a proper velocity then \begin{equation}c^2=c^2\left(v_{1t}+v_{2t})\right)^2-\left(v_{1x}+v_{2x})\right)^2-\left(v_{1y}+v_{2y})\right)^2-\left(v_{1z}+v_{2z})\right)^2\end{equation} (6) \begin{equation}c^2=c^2v_{1t}^2-v_{1x}^2-v_{1y}^2-v_{1z}^2+ c^2v_{1t}^2-v_{1x}^2-v_{1y}^2-v_{1z}^2+2v_1.v_2\end{equation} \begin{equation}c^2=c^2+c^2+2v_1.v_2\end{equation}(7) \begin{equation}v_1.v_2\le -½ c^2\end{equation}(8) which is not true since \begin{equation}v.v=c^2\end{equation} Therefore $$v_1+v_2$$ is not a four vector if $$v_1$$ and $$v_2$$ are four vectors Again if $$v_1-v_2$$ is a four vector then 
\begin{equation}c^2=c^2\left(v_{1t}-v_{2t})\right)^2-\left(v_{1x}-v_{2x})\right)^2-\left(v_{1y}-v_{2y})\right)^2-\left(v_{1z}-v_{2z})\right)^2\end{equation} (9) \begin{equation}c^2=c^2v_{1t}^2-v_{1x}^2-v_{1y}^2-v_{1z}^2+ c^2v_{1t}^2-v_{1x}^2-v_{1y}^2-v_{1z}^2-2v_1.v_2\end{equation} \begin{equation}c^2=c^2+c^2-2v_1.v_2\end{equation} \begin{equation} ½ c^2=v_1.v_2\end{equation} (10) But the above formula is not a valid one. Given two infinitesimally close four velocities their difference is not a four velocity. Therefore the manifold has to be a perforated one. The manifold indeed is a mesh of worldlines and each world line is a train of proper velocity four vectors as tangents. A particle moves along a timelike path and therefore each point on it has a four velocity as a tangent representing the motion. The manifold is discrete and that presents difficulty an impossibility to be precise with procedure like differentiation. Last edited: Jan 14, 2021 at 11:42 AM Anamitra Palit, Jan 14, 2021 at 11:14 AM [It may be necessary to refresh the page for proper viewing] We start with the norm of proper velocity[metric signature(+,-,-,-)] \begin{equation}c^2=c^2 v_t^2-v_x^2-v_y^2-v_z^2\end{equation} \begin{equation}c^2=c^2\left(\frac{dt}{d\tau}\right)^2-\left(\frac{dt}{d\tau}\right)^2-\left(\frac{dt}{d\tau}\right)^2-\left(\frac{dt}{d\tau}\right)^2\end{equation} Differentiating with respect to propertime, \begin {equation}c^2\frac{dt}{d\tau}\frac{d^2 t}{d \tau^2}-\frac{dx}{d\tau}\frac{d^2 x}{d \tau^2}-\frac{dy}{d\tau}\frac{d^2 y}{d \tau^2}-\frac{dz}{d\tau}\frac{d^2 z}{d \tau^2}=0\end {equation} (1) \begin{equation}\Rightarrow v.a=0\end{equation}(2) We choose k such that [k] =T so that ka has the dimension of velocity \begin{equation}\Rightarrow v.ka=0\end{equation}(3) We have had earlier, \begin{equation}v.v=c^2\end{equation}(4) From (3) and (4) \begin{equation}\Rightarrow v. \left(v-ka\right)=c^2\end{equation} (5) By adjusting the value [but maintaining its dimension as that of time] we always do have equation (5) If $\left(v-ka\right)=v'$ is a proper velocity then we have $v.v'=c^2$ in opposition to $v.v'>=c^2$ If $\left(v-ka\right)=v'$ is a not a proper velocity then \begin{equation}c^2\left(\frac{dt'}{d\tau'}\right)^2-\left(\frac{dx'}{d\tau'}\right)^2-\left(\frac{dy'}{d\tau'}\right)^2-\left(\frac{dz'}{d\tau'}\right)^2=c'^2 \ne c^2\end{equation} We have from the reversed Cauchy Schwarz inequality, \begin{array}{l}\left(c^2 \frac{dt}{d\tau}\frac{dt'}{d\tau'}-\frac{dx}{d\tau}\frac{dx'}{d\tau'}-\frac{dy}{d\tau}\frac{dy'}{d\tau'}-\frac{dz}{d\tau}\frac{dz'}{d\tau'}\right)^2\ge\\ \left(c^2\left(\frac{dt}{d\tau}\right)^2-\left(\frac{dx}{d\tau}\right)^2-\left(\frac{dy}{d\tau}\right)^2-\left(\frac{dz}{d\tau}\right)^2\right)\left(c^2\left(\frac{dt'}{d\tau'}\right)^2-\left(\frac{dx'}{d\tau'}\right)^2-\left(\frac{dy'}{d\tau'}\right)^2-\left(\frac{dz'}{d\tau'}\right)^2\right)\end{array}(6) or,\begin{equation}\left(c^2 \frac{dt}{d\tau}\frac{dt'}{d\tau'}-\frac{dx}{d\tau}\frac{dx'}{d\tau'}-\frac{dy}{d\tau}\frac{dy'}{d\tau'}-\frac{dz}{d\tau}\frac{dz'}{d\tau'}\right)^2\ge c^2c'^2\end{equation} (7) \begin{equation}c^2 \frac{dt}{d\tau}\frac{dt'}{d\tau'}-\frac{dx}{d\tau}\frac{dx'}{d\tau'}-\frac{dy}{d\tau}\frac{dy'}{d\tau'}-\frac{dz}{d\tau}\frac{dz'}{d\tau'}\ge cc'\end{equation} \begin{equation}c^2 \frac{dt}{d\tau}\frac{dt'}{d\tau'}-\frac{dx}{d\tau}\frac{dx'}{d\tau'}-\frac{dy}{d\tau}\frac{dy'}{d\tau'}-\frac{dz}{d\tau}\frac{dz'}{d\tau'}\le -cc'\end{equation} v.v'>=cc' or v.v'<=-cc' But v.v'=c^2. 
Therefore the solution is c'=c .We have v' is a proper velocity.||But we assumed /postulated at the very outset that v' is not a proper velocity. [One may require to refresh the page for proper viewing] Norm of Four Acceleration Four Acceleration \begin{equation}\left(c\frac{d^2 t}{d\tau^2},\frac{d^2 x}{d\tau^2},\frac{d^2 y}{d\tau^2}, \frac{d^2 z}{d\tau^2}\right)\end{equation} (1) \begin{equation}c^2N=c^2\left(\frac{d^2 t}{d\tau^2}\right)^2-\left(\frac{d^2 x}{d\tau^2}\right)^2-\left(\frac{d^2 y}{d\tau^2}\right)^2-\left( \frac{d^2 z}{d\tau^2}\right)^2\end{equation} (2) We consider the metric \begin{equation}c^2d\tau^2=c^2dt^2-dx^2-dy^2-dz^2 \end{equation} (4) \begin{equation}\Rightarrow c^2=c^2\left(\frac{dt}{d\tau}\right)^2-\left(\frac{dx}{d\tau}\right)^2- \left(\frac{dy}{d\tau}\right)^2-\left(\frac{dz}{d\tau}\right)^2 \end{equation} (5) Differentiating (5) with respect to proper time we have, \[ c^2\frac{dt}{d\tau}\frac{d^2 t}{d \tau^2}- \frac{dx}{d\tau}\frac{d^2 x}{d \tau^2}-\frac{dy}{d\tau}\frac{d^2 y}{d \tau^2}-\frac{dz}{d\tau}\frac{d^2 z}{d \tau^2}=0\] (6) By applying the Cauchy Schwarz inequality we have, \[\left(\frac{dx}{d\tau}\frac{d^2 x}{d \tau^2}+\frac{d y}{d\tau}\frac{d^2y}{d \tau^2}+\frac{dz}{d\tau}\frac{d^2 z}{d \tau^2}\right)^2 \\ \ge \left(\left(\frac{d x}{d\tau}\right)^2+\left(\frac{dy}{d\tau}\right)^2+\left(\frac{dz}{d\tau} \right)^2\right)\left(\left(\frac{d^2 x}{d \tau^2}\right)^2+\left(\frac{d^2 y}{d \tau^2}\right)^2+\left(\frac{d^2 z}{d \tau^2}\right)^2\right)\](7) \[\left(c^2\left(\frac{d t}{d \tau}\right)^2-c^2\right)\left(c^2\left( \frac {d^2 t}{d \tau^2}\right)^2-c^2N\right) \ge\left( c^2 \frac {d^2 t}{d\tau^2}\right)^2\left(c^2\frac{d t}{d\tau}\right)^2\] \[\left(\left(\frac{dt}{d \tau}\right)^2-1\right)\left(\left( \frac {d^2 t}{d \tau^2}\right)^2-N\right) \ge \left( \frac {d^2 t}{d\tau^2}\right)^2\left(\frac{dt}{d\tau}\right)^2\] (8) \[ \left( \frac {d^2 t}{d\tau^2}\right)^2\left(\frac{dt}{d\tau}\right)^2-N\left(\frac{dt}{d \tau}\right)^2-\left( \frac {d^2 t}{d \tau^2}\right)^2+N\ge \left( \frac {d^2 t}{d\tau^2}\right)^2\left(\frac{dt}{d\tau}\right)^2\] \[ \Rightarrow-N\left(\frac{dt}{d \tau}\right)^2-\left( \frac {d^2 t}{d \tau^2}\right)^2+N\ge 0\] (9) \[N\left(1-\left(\frac{dt}{d\tau}\right)^2\right)\ge \left( \frac {d^2 t}{d \tau^2}\right)^2\] \[N\left(1-\gamma^2\right)\ge \left( \frac {d^2 t}{d \tau^2}\right)^2\](10) The right side of (10) is always positive or zero. Therefore the left side is also positive or zero. Therefore N<=0 since gamma[Lorentz factor] is positive[>= unity]. N cannot be positive unless the particle is moving uniformly. If N is negative then from (1) we have \[c^2\left(\frac{d^2 t}{d\tau^2}\right)^2\le \left(\frac{d^2 x}{d\tau^2}\right)^2-\left(\frac{d^2 y}{d\tau^2}\right)^2-\left( \frac{d^2 z}{d\tau^2}\right)^2\] For a particle at rest (spatially) and N<0, \[\left(\frac{d^2 t}{d\tau^2}\right)^2\le 0\](11) Equation (11) will not hold, the left side being a [perfect square and hence positive or zero]unless \begin{equation}\frac{d^2 t}{d\tau^2}=0\end{equation} that is unless \begin{equation}\frac{dt}{d\tau}=constant \Rightarrow \gamma=constant\end{equation} that is unless the particle is moving with a constant velocity. An accelerating particle will not cater to N<0. For N=0 we have from (10) \[\left(\frac{d^2x}{d\tau^2}\right)^2\le 0 \](12) Equation (12) is not a valid on unless the particle moves with a constant velocity. 
We conclude that the norm square of the acceleration vector, c^2 N, cannot be positive, negative, or zero unless the particle is moving uniformly, that is, unless it moves with a constant velocity.
Anamitra Palit, Jan 16, 2021 at 12:29 PM
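The quantities discussed in this thread are easy to probe numerically. The sketch below is mine, not part of the original posts: it builds two four-velocities from ordinary 3-velocities (the particular velocity values are arbitrary choices for illustration), checks that each has Minkowski norm c^2, that their dot product is at least c^2, and that the norm of their sum exceeds c^2.

import numpy as np

C = 1.0  # work in units where c = 1

def four_velocity(vx, vy, vz):
    """Four-velocity (c*gamma, gamma*v) for an ordinary 3-velocity with |v| < c."""
    v2 = vx**2 + vy**2 + vz**2
    gamma = 1.0 / np.sqrt(1.0 - v2 / C**2)
    return np.array([C * gamma, gamma * vx, gamma * vy, gamma * vz])

def minkowski_dot(a, b):
    """Minkowski dot product with signature (+,-,-,-)."""
    return a[0] * b[0] - a[1] * b[1] - a[2] * b[2] - a[3] * b[3]

u1 = four_velocity(0.6, 0.0, 0.0)    # arbitrary example velocities
u2 = four_velocity(-0.3, 0.5, 0.0)

print(minkowski_dot(u1, u1))             # = c^2: every four-velocity has norm c
print(minkowski_dot(u1, u2))             # >= c^2, consistent with the reversed Cauchy-Schwarz bound
print(minkowski_dot(u1 + u2, u1 + u2))   # > c^2: the sum is timelike but not itself a four-velocity

The last line points to the resolution of the apparent contradiction debated above: v1 + v2 is a perfectly good four-vector, but since its norm is larger than c^2 it is not the four-velocity of any particle, so no conflict with equation (1) actually arises.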
CommonCrawl
Average order of an arithmetic function. From the Encyclopedia of Mathematics.
2010 Mathematics Subject Classification: Primary: 11A25 [MSN][ZBL]

Some simpler or better-understood function which takes the same values "on average" as an arithmetic function.

Let $f$, $g$ be functions on the natural numbers. We say that $f$ has average order $g$ if the asymptotic equality $$ \sum_{n \le x} f(n) \sim \sum_{n \le x} g(n) $$ holds as $x$ tends to infinity. It is conventional to assume that the approximating function $g$ is continuous and monotone.

The average order of $d(n)$, the number of divisors of $n$, is $\log n$;
The average order of $\sigma(n)$, the sum of divisors of $n$, is $\frac{\pi^2}{6} n$;
The average order of $\phi(n)$, the Euler totient function of $n$, is $\frac{6}{\pi^2} n$;
The average order of $r(n)$, the number of ways of expressing $n$ as a sum of two squares, is $\pi$;
The Prime Number Theorem is equivalent to the statement that the von Mangoldt function $\Lambda(n)$ has average order 1.

See also: Asymptotics of arithmetic functions; Normal order of an arithmetic function.

References:
G.H. Hardy, E.M. Wright (2008). An Introduction to the Theory of Numbers (6th ed.). Oxford University Press. ISBN 0-19-921986-5
Gérald Tenenbaum (1995). Introduction to Analytic and Probabilistic Number Theory. Cambridge Studies in Advanced Mathematics 46. Cambridge University Press. ISBN 0-521-41261-7

Source: https://www.encyclopediaofmath.org/index.php?title=Average_order_of_an_arithmetic_function&oldid=39078
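The defining asymptotic can be checked empirically. Below is a small Python sketch (added here, not part of the entry) comparing $\sum_{n\le x} d(n)$ with $\sum_{n\le x}\log n$; the cutoff $10^5$ is an arbitrary choice, and the ratio approaches 1 only slowly because the error term is of lower order.

import math

def divisor_counts(limit):
    """d(n) for n = 1..limit via a sieve-style double loop."""
    d = [0] * (limit + 1)
    for i in range(1, limit + 1):
        for j in range(i, limit + 1, i):
            d[j] += 1
    return d

N = 100_000
d = divisor_counts(N)
lhs = sum(d[1:])                                   # sum_{n <= N} d(n)
rhs = sum(math.log(n) for n in range(1, N + 1))    # sum_{n <= N} log n
print(lhs, rhs, lhs / rhs)  # the ratio tends to 1 as N grows: d(n) has average order log n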
CommonCrawl
Home Journals RIA Efficient Computation for Localization and Navigation System for a Differential Drive Mobile Robot in Indoor and Outdoor Environments Efficient Computation for Localization and Navigation System for a Differential Drive Mobile Robot in Indoor and Outdoor Environments Selvaraj Karupusamy | Sundaram Maruthachalam | Suresh Mayilswamy | Shubham Sharma | Jujhar Singh | Giulio Lorenzini* Department of Robotics and Automation Engineering, PSG College of Technology, Coimbatore 641004, Tamil Nadu, India Department of Electrical and Electronics Engineering, PSG College of Technology, Coimbatore 641004, Tamil Nadu, India Dept. of Mechanical Engineering, IK Gujral Punjab Technical University, Kapurthala 144603, India Department of Engineering and Architecture, University of Parma, Parco Area delle Scienze 181/A, Parma 43124, Italy [email protected] Numerous challenges are usually faced during the design and development of an autonomous mobile robot. Path planning and navigation are two significant areas in the control of autonomous mobile robots. The computation of odometry plays a major role in developing navigation systems. This research aims to develop an effective method for the computation of odometry using low-cost sensors, in the differential drive mobile robot. The controller acquires the localization of the robot and guides the path to reach the required target position using the calculated odometry and its created new two-dimensional mapping. The proposed method enables the determination of the global position of the robot through odometry calibration within the indoor and outdoor environment using Graphical Simulation software. odometry, navigation, mapping, localization, range finders, simulation Autonomous mobile robots are required for supporting humans in nonliving areas such as underwater and space research. They are useful for research activities such as monitoring, cleaning, searching, and inspection. The mobile robots stand for the evolution of these as they can move freely in a dynamic environment. The propagation of mobile robots is subjected to handling two major problems, which are: To find where the robot is at a particular instance (localization). Path planning with obstacle avoidance (navigation system). The proposed method is used to enhance the capability of localization, which depends on odometry and improves the navigation system. This makes the path planning to become more flexible and easily pertinent. To deal with the localization problem, first, we need to apply an odometry calibration method. To that method, we need to integrate odometry errors right through a path traveled by the robot and bring into new improved odometry parameters. This method is an endpoint method with diverse initial and final points, which do not require additional sensors. Robots sense and transfer the information to the remote station by wireless communication systems. An important advantage of the method is that it is compatible with any odometry model. In this work, the development of a differential drive mobile robot for autonomous navigation system is described. Figure 1(a) shows the top view of the differential drive mobile robot and Figure 1(b) shows the bottom view of the differential drive mobile robot. The global position of the robot is mapped using the odometry data from the robot encoders and the robot can effectively avoid obstacles. 
Local reconstruction of the robot's position and orientation is made possible with an accurate odometry calibration from the encoder's data [1]. The infrared (IR) range finder is a low-cost, lightweight, and low-power-consumption device compared to a laser range finder for an indoor environment. It deals with two-dimensional (2D) mapping and localization. This is built by a 2D grid map in local coordinate, using data from the IR range finder. It is integrated into the global coordinate of a mobile robot using information from IR landmarks. A Wi-Fi (wireless networking) transmitter is used to transmit the odometry and local reconstruction information. It provides a consistent global map that can be used in the autonomous navigation of mobile robots in remote locations [2, 3]. The experimental result under indoor environment is obtained by Virtual Simulation software. Figure 1. Differential drive mobile robot: (a). Top view; (b) Bottom view 2. Architecture of the Differential Drive Mobile Robot The designed differential drive robot has several components, including controller, sensors, motors, and actuators. The controller is the brain of the robot that helps to acquire the values from the sensors. and transmits the acquired values to a remote station through Wi-Fi. The architecture of the differential drive mobile robot is shown in Figure 2. In the environment, the developed algorithm plots a 2D map of the path traveled by the robot based on the collected information. The designed robot has two types of sensors to identify the environmental state. The first one is for finding the distance of the obstacle placed in front of the robot. Another one is to estimate the robot position. The robot algorithm has been simultaneously computing the sensors data as localization and mapping [1, 4 ,5]. The current position (x, y, ɵ) value will displayed through the LCD (liquid crystal display) that is placed on the robot itself. A sequential flowchart for system design is shown in Figure 3. Figure 2. The architecture of the differential drive mobile robot Figure 3. Sequential flowchart for system design 3. Differential Drive Wheel Mobile Robot 3.1 Differential drive wheel The two-wheeled differential drive robot is driven by two independently operated wheels placed on both sides of the robot body. The robot will go in a straight line when both the wheels are driven in the same direction and speed. When both wheels are turned with equal speed in opposite directions, the robot will rotate on the central point of the axis [6]. Figure 4 shows the direction of the robot when moving towards left direction. As the direction of the robot depends on the direction of rotation of the two driven wheels, this quantity of encoder pulse should be sensed and controlled precisely [3, 7]. Figure 4. Differential drive wheel position where, $D_{\mathrm{W}}$ is the distance between the two wheels (mm). The robot was driven on a surface of 5 m × 7.5 m in an indoor environment. A consistently smooth cement concrete floor was used as the landscape that normally reduced the chance of nonsystematic errors such as wheel slippage. The overall weight of the robot was 1.5 kg and the distance between the two wheels was 180 mm. The maximum speed was up to 300 mm/s. Encoders were used to calculate the linear displacement of each wheel. The new orientation of the robot could be estimated from the difference in the encoder counts, the diameter of the wheels, and distance between the wheels [1-5, 8-13]. 
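To illustrate how a single range measurement becomes a point on the 2D map described above, here is a brief Python sketch added for this text; it is not the authors' implementation, and the frame conventions, function name, and pose variables are my own. A reading taken at a given servo angle and range in the robot frame is transformed into world coordinates using the current odometry pose (x, y, θ).

import math

def range_to_world_point(x, y, theta, servo_angle, distance):
    """Convert one IR/ultrasonic range reading into a 2D world-frame map point.

    (x, y, theta): current robot pose from odometry (world frame, theta in rad)
    servo_angle:   scan angle of the range finder relative to the robot heading (rad)
    distance:      measured range to the obstacle (same length unit as x and y)
    """
    # Point in the robot's local frame
    lx = distance * math.cos(servo_angle)
    ly = distance * math.sin(servo_angle)
    # Rigid-body transform into the world frame
    wx = x + lx * math.cos(theta) - ly * math.sin(theta)
    wy = y + lx * math.sin(theta) + ly * math.cos(theta)
    return wx, wy

# Example: robot at (500 mm, 200 mm, 30 deg), obstacle seen 400 mm away at servo angle 45 deg
print(range_to_world_point(500.0, 200.0, math.radians(30), math.radians(45), 400.0))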
The encoder values were transferred to the remote station system through the wireless data transmission module to plot a 2D map in the Virtual Simulation software. 3.2 Odometry computation method The odometry uses the data from motion sensors, to approximate the changes in the position relative to the starting point of the robot over time. This computation method calculates the position of the robot (X, Y, Ө) (X and Y are the coordinates of the ground plane and Ө is the direction of the robot). As this method is implemented in the differential drive mobile robot, the mapping calculation according to its movement is quite complex. The proposed method is used to simplify the complexity of the existing method by acquiring the current position of both (right and left) wheels in graphical display at every millisecond. In that way, the Simulation software algorithm can monitor the updated values of (x, y, Ө) at a time; 't' is updated with the previous positional value (x, y, Ө). The displacement of the robot is based on its size and it is calculated by d = 2πr, where r is the radius of the wheel. Analysis of the robot motion is shown in Figure 5. Let the starting point of the robot be zero. In this method, the developed algorithm calculates the displacement of the wheel using pulses obtained from the encoders, placed in the wheel [3]. Let the number of counting pulses be n, then the wheel displacement is given by, $d=\frac{n 2 \pi r}{\text { Total number of pulses per one revolution }}$ (1) d – wheel displacement (mm) r – radius of the wheel (mm) n – counted pulse Figure 5. Differential drive robot moving from the starting point The robot covers distances at a time t. The arc length distance from the center point is Dc, which is calculated from the below equation, $D_{\mathrm{C}}=\frac{D_{\mathrm{L}}+D_{\mathrm{R}}}{2}$ (2) $D_{\mathrm{L}}$ – travel distance of the left wheel (in mm) $D_{\mathrm{R}}$ – travel distance of the right wheel (in mm) The direction of the robot depends on $\theta$ degree, to be rotated. The moving distance of the left wheel is less than that of the right wheel. So, the robot moves toward the left side or the first quadrant. The arc length of each wheel is different in radius but common concerning the center point, by considering wheel radius as $\mathrm{L}_{r} \text { and } \mathrm{R}_{r}$. Figures 5 and 6 show the representation of the wheel movement. These values are calculated from the basic geometry functions. The angles and $\theta$ are equal. $\alpha=\frac{D_{\mathrm{L}}-D_{\mathrm{R}}}{D_{\mathrm{w}}}$ (3) α is the angle difference in radians from the starting point, as shown in Figure 5. $R_{\mathrm{L}}=\frac{D_{\mathrm{L}}}{\alpha}$ (4) From Eq. (4), $\alpha R_{\mathrm{L}}=D_{\mathrm{L}}$ (5) $R_{\mathrm{r}}=\frac{D_{\mathrm{R}}}{\alpha}$ (6) $\alpha R_{\mathrm{r}}=D_{\mathrm{R}}$ (7) $\frac{\left(\mathrm{R}_{\mathrm{L}}+\mathrm{R}_{\mathrm{r}}\right)}{2}=R_{\mathrm{C}}$ (8) $R_{\mathrm{C}}$ is the radius of the center point of the robot wheels. From Eqns. 
(5) and (7), $\begin{aligned} &\alpha R_{\mathrm{L}}+\alpha R_{\mathrm{r}}=D_{\mathrm{L}}+D_{\mathrm{R}} \\ &\alpha\left(R_{\mathrm{L}}+R_{\mathrm{r}}\right)=D_{\mathrm{L}}+D_{\mathrm{R}} \end{aligned}$ (9) $\left(R_{\mathrm{L}}+R_{\mathrm{r}}\right)=2 R_{\mathrm{C}}$ (10) Substituting (10) in (9), $2 \alpha R_{\mathrm{C}}=D_{\mathrm{L}}+D_{\mathrm{R}}$ (11) $R_{\mathrm{C}}=\frac{D_{\mathrm{L}}+D_{\mathrm{R}}}{2 \alpha}$ (12) Let the starting positions of the robot be X and Y, denoted by the equations $X=x+R_{\mathrm{C}} \cos \theta$ (13) $Y=y+R_{\mathrm{C}} \sin \theta$ (14) X, Y, $\theta$ are the current positions of the robot at every millisecond. $X_{\text {new }}, Y_{\text {new }}, \theta_{\text {new }}$ are the global positions of the robot that can be calculated from these equations. This is a continuous process during the entire task. So, the computation of the new position and the current robot position is done [13]. Figure 6 depicts the representation of the position at each moment.

3.3 Navigation system

Let the starting position of the robot be S1, at the time t1, and let the next position be S2, at the time t2. At the time t1, the robot moves toward the left side. But the moving distance of the left wheel is found to be higher than that of the right wheel at time t2. So, the robot starts to move toward the right side from point S2. While the robot moves to the right side, the center point of the arc is placed on the right side, as shown in Figure 6. $D_{\mathrm{L}}, D_{\mathrm{R}}, \text { and } D_{\mathrm{C}}$ are calculated from the robot wheel movements. This is helpful to find the radii of the arcs $R_{\mathrm{L}}, R_{\mathrm{C}} \text {, and } R_{\mathrm{r}}$, and thereby the new position of the robot $\left(X_{\text {new }}, Y_{\text {new }}, \theta_{\text {new }}\right)$ will be calculated. The latest position values replace the previous position values, and the software algorithm performs the calculation and updates the current path of the robot [9]. A new angle of the robot can be calculated using Eq. (10), $\theta_{\mathrm{N}}=\alpha \times 180 / \pi$ (15) Calculation of the actual position of the robot is given below: $\begin{gathered} X=\left[X_{\text {new }}\right]+R_{\mathrm{C}} \cos \left(\theta_{\mathrm{N}}+\theta\right) \\ X=\left[x+R_{\mathrm{C}} \cos \theta\right]+R_{\mathrm{C}} \cos \left(\theta_{\mathrm{N}}+\theta\right) \\ X=x+R_{\mathrm{C}}\left[\cos \theta+\cos \left(\theta_{\mathrm{N}}+\theta\right)\right] \end{gathered}$ (16) $\begin{gathered} Y=\left[Y_{\text {new }}\right]+R_{\mathrm{C}} \sin \left(\theta_{\mathrm{N}}+\theta\right) \\ Y=\left[y+R_{\mathrm{C}} \sin \theta\right]+R_{\mathrm{C}} \sin \left(\theta_{\mathrm{N}}+\theta\right) \\ Y=y+R_{\mathrm{C}}\left[\sin \theta+\sin \left(\theta_{\mathrm{N}}+\theta\right)\right] \end{gathered}$ (17) $\theta_{\text {new }}=\theta+\theta_{\mathrm{N}}$ (18) Figure 6. Odometry calculation of the robot at an instant of time. The above is a simplified odometry function for a differential drive mobile robot.

4. Experimental Analyses

The differential drive robot performs autonomous movement by acquiring robot position and orientation with the help of accurate odometry calculation. The calculation is done from the starting position of the robot with the help of the encoder's readings, from which the global position of the robot within the environment can be determined [14, 15]. Figure 7.
Differential drive mobile robot with an IR range finder and ultrasonic sensor $\text { Distance }=\frac{\text { Speed of sound } \times \text { Time taken }}{2}$ Ultrasonic sensors can measure distance. The device emits an ultrasonic wave and receives the wave reflected from the object. Distance measurement can be achieved by measuring the time between the emission and reception. Here, the developed controller algorithm controls the movement of wheels. Time-calculated odometry values help in identifying the robot path and direction by integrating the value into the global coordinate of the mobile robot. The IR range finder works by the process of triangulation. Figure 7 shows that the experimental test is differential drive mobile robot with an IR range finder and ultrasonic sensor. A pulse of light is emitted and then reflected back. When the light returns, it comes back at an angle that is dependent on the distance of the reflecting object. Triangulation works by detecting this reflected beam angle. By knowing the angle, distance can be determined [12]. 4.1 Localization and mapping Robotic mapping is a field related to computer vision and capturing environmental data. The IR range finder is used to map the environment. The robot needs at least one range finder and one nonmodified servo. When the ultrasonic sensor detects any obstacle, the servo starts to rotate to a particular degree and record the coordinate values [2, 12]. Then again servo moves to the next available angle and notes the same. The system performs the process of monitoring environment and collecting the point cloud data continuously. Figure 8 shows the servo mapping system (servo motor with IR range finder) can achieve in front of 180° rotation. It has an array of coordinates for the corresponding angle. After the first cycle, the servo system must rotate back in the reverse direction and record the values. This technique helps to perform the mapping process. The following figures shows that the experimental test is performed in a single room by placing some obstacles in different places [10]. Figure 8. Mapping with the IR range finder When the robot is powered on in the indoor environment, the previously measured robot position and the global position are interconnected. The computed position is saved as a relative position. Robots' movement toward the captured data and it will be compared with already saved mapped data for localization [16, 17]. Figure 9. Scanning of objects at position P1 Figure 10. Scanning of objects at position P2 Figure 11. Mapped data at P1 In case of its matching with the data in its localization position, the orientation of the robot position automatically gets data from the global position. Figures 9 and 10 show the robot position scanning of the room with two objects. Figures 11 and 12 show the structure of the scanning data at different positions of the robot. Finally, both mapped images are the same. This can be synchronized with the robot's stored mapping data for localization process. Localization is helpful for the robot to make a decision and achieve target position along with identifying the shortest path without human knowledge. The user can provide the whole environment structure to the robot. The constructed data can contain wall position, angles, clutter shape, and its position data [12, 18]. Every individual room structure can be saved in different segments. Low-resolution sensors can make more noise in this mapping structure. 
Noise is produced by comparison errors in the localization process. This can be reduced by the use of a noise-reducing algorithm in the localization of clutter.

4.2 Experiment in an indoor environment

The robot starts to move from the home position autonomously and navigates its path so as to avoid the obstacles in the room. At the same time, the main controller simultaneously calculates the robot position (x, y, $\theta$), displays it on the LCD and, in addition, transmits it to the computer at the remote location [19]. The simulation program starts to plot the values as a 2D graph to trace the robot path [20]. The robot starting position is considered as the origin of the plotting graph. The robot scanning process starts when it meets an obstacle in the moving path; otherwise, it keeps moving toward the target. These experiments enable the study and analysis of the errors of the odometry computation and thereby the calculation of its efficiency. Figure 13 shows the robot's moving direction while avoiding the obstacle in its path. Figure 13. Robot moving path with indoor structure

4.3 Efficient odometry computation

The experimental test is conducted at different speeds and different cycle times for the calculation in a room. In this test, an examination of the difficulties in finding the exact position and path of the robot is performed. Figure 14 represents the accuracy as a function of time. Computation speed and accuracy depend on the number of mathematical operations in the algorithm. In this odometry, only a minimum number of multiplications and additions is used. The algorithm does not contain integration, differentiation, or matrix functions, so it takes the least time for the computing process. Other major odometry parameters are given below.

4.3.1 Wheel friction. Friction of the wheel affects the position accuracy. Sometimes the wheels slip on a smooth surface, yet the encoder still gives out pulses while the robot is not moving. This may produce a wrong position value during the odometry. This problem can be reduced by using high-friction tires or wheels of larger width.

4.3.2 Computing speed. The odometry computation is allotted a maximum slot time. When this cycle time increases, small differences between successive computations are neglected, and the computed position changes deviate. This mainly affects the calculation of direction changes.

4.3.3 Number of pulses per revolution. The encoder output is directly related to the accuracy. The number of pulses is directly proportional to the resolution.

4.3.4 Distance between the two wheels. The distance between the two wheels is directly proportional to the accuracy. A larger wheel separation allows smaller turning differences to be resolved.

4.3.5 Diameter of the wheel. Positional accuracy depends on the diameter of the wheel. The accuracy is inversely proportional to the size of the wheel.

This work examined experimentally the computing speed and accuracy of the odometry computation during robot motion. The graph is plotted between the robot motion time increments and the accuracy level of the position computation. The accuracy level percentage can be calculated from the traveled distance and the deviation of the actual position at unit time: $$\text{Position Accuracy}=\frac{\text{Displaced position}-\text{Target position}}{\text{Total displacement}} \times 100\,\%$$ This experiment can check the accuracy level at different computation speeds.
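Since the discussion above stresses that the odometry loop uses only a handful of multiplications and additions per cycle, it may help to see one complete update step written out. The following Python sketch is an illustration added for this text, not the authors' code: the wheel radius and encoder resolution are placeholder values (only the 180 mm track width is taken from Section 3.1), and the pose update uses the standard exact-arc (instantaneous centre of rotation) form, which serves the same purpose as Eqs. (13)-(18) even though the algebra is arranged differently and the heading change is taken counter-clockwise positive, unlike Eq. (3).

import math

# Parameters: track width from Section 3.1; wheel radius and resolution are assumptions
WHEEL_RADIUS = 0.0325      # m (placeholder)
TRACK_WIDTH = 0.180        # m, distance between the two wheels
TICKS_PER_REV = 360        # encoder pulses per wheel revolution (placeholder)

def wheel_distance(ticks):
    """Eq. (1): linear displacement of one wheel from its encoder count."""
    return ticks * 2.0 * math.pi * WHEEL_RADIUS / TICKS_PER_REV

def odometry_step(x, y, theta, ticks_left, ticks_right):
    """One pose update for a differential drive using the exact-arc (ICC) model."""
    d_l = wheel_distance(ticks_left)
    d_r = wheel_distance(ticks_right)
    d_c = 0.5 * (d_l + d_r)               # distance travelled by the chassis centre, cf. Eq. (2)
    alpha = (d_r - d_l) / TRACK_WIDTH     # heading change over the cycle
    if abs(alpha) < 1e-9:                 # wheels moved equally: straight-line motion
        return x + d_c * math.cos(theta), y + d_c * math.sin(theta), theta
    r_c = d_c / alpha                     # radius of the centre arc, cf. Eq. (12)
    x_new = x + r_c * (math.sin(theta + alpha) - math.sin(theta))
    y_new = y - r_c * (math.cos(theta + alpha) - math.cos(theta))
    return x_new, y_new, theta + alpha    # cf. Eq. (18)

# Example: start at the origin; left wheel 95 ticks, right wheel 105 ticks in one cycle
print(odometry_step(0.0, 0.0, 0.0, 95, 105))

Each call costs a fixed, small number of arithmetic operations and a few trigonometric evaluations, which is consistent with the computation-time argument made above.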
Figure 14 shows various levels of accuracy for different computation periods. Figure 14. Experimental graph plotted for various computing speeds

4.4 Odometry experimental analysis

Figure 15. Block diagram for plotting a robot path on a virtual screen

The computation of the robot position is at maximum accuracy at the starting point. The position of the robot at the starting point is accurate, and all later positions are calculated from that point [21]. However, the position accuracy degrades as time and traveled distance increase, so the position error grows rapidly. The maximum error occurs when the direction of the robot changes. The variation of the robot position accuracy is calculated for every period. The position accuracy can be calculated as $$\text{Odometry position accuracy}=\frac{\text{Deviation of the robot's actual position at the destination point}}{\text{Robot traveling distance}}$$ where the traveling distance is $D_{\mathrm{s}}=\sqrt{\left(x_{t}-x_{\text {start }}\right)^{2}+\left(y_{t}-y_{\text {start }}\right)^{2}}$. Here $x_{t}$ is the robot displacement on the x-axis at time t, $x_{\text {start }}$ is the starting position of the robot on the x-axis, $y_{t}$ is the robot displacement on the y-axis at time t, and $y_{\text {start }}$ is the starting position of the robot on the y-axis. The calculated position data can be sent wirelessly to the remote station, which has several modules that segment the data received from the robot. The remote station monitoring system has virtual simulation software that can process the position and mapped data. Figure 15 shows the block diagram of the software program for calculating the robot path. This computation method is very useful for error analysis, which is done by calibration.

5. Results and Discussions

On the basis of the signals arriving from the robots, the appropriate action is chosen through logic and executed. Figure 16. (a) Front panel view for 2D scanning and mapping; (b) Front panel view of the path traveled by the robot

The computed servo motor and range finder information is used to find the 2D coordinates. Every point indicates the edge of an obstacle. Image processing can merge the detected edges of the obstacles, which is useful to identify the exact shape of the entire room or environment [2]. A large number of obstacles in the robot path or the indoor environment can slightly decrease the success of accurate mapping [22]. Figures 16(a) and 16(b) show the graphical representation of the indoor mapping and the robot path, respectively. This single-robot data acquisition can be extended to multiple robots simultaneously mapping different locations. All robot data are incorporated with the global position in the master computer [23]. This approach is one of the advanced technologies for localization and mapping, which is used for automated guided vehicle development applications [2, 11]. A network can be formed easily by an access point. In this case, a computer equipped with a wireless communication adapter is used. A software algorithm is developed to receive the incoming data from the robot controller and perform the 2D simulation mapping [24]. A graphical representation of the software program is shown in Figure 17. Figure 17. Graphical program for acquiring data from the mobile robot Figure 18. (a) A single robot can perform mapping of the entire building; (b) Multiple robots can simultaneously perform mapping of every room

Every robot has a separate address that is used for localization identification and for updating the 2D map of the unknown area. Figure 18(a) shows the mapping of the entire building via a single robot.
More robots can be used to survey the whole building with localization data. Figure 18(b) shows multiple robots simultaneously mapping every room. It reduces the mapping time to a single shot. Each robot has individual localizing and all data can be merged at the remote location for mapping the entire indoor environment. Figure 18(b) shows the front panel view of the experimental setup mapping data. The efficient odometry method for the autonomous path planning stated in this research allows global optimal path planning, mapping, and localization in the presence of static obstacles of any shape. Local reconstruction of the robot's position and orientation is made possible with an accurate calibration starting from the low-cost equipment. These improvements can easily be integrated into the global coordinate of a mobile robot using information from 2D and 3D mapping landmarks. It provides a consistent global map that can be used in the autonomous navigation of mobile robots. This research work used hardware implementation and odometry algorithm development. The experiments conducted showed good results, which suggests it an efficient odometry algorithm for increasing computing speed and position accuracy. [1] Giovannangeli, C., Gaussier, P. (2010). Interactive teaching for vision-based mobile robots: A sensory-motor approach. IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans, 40(1): 13-28. https://doi.org/10.1109/TSMCA.2009.2033029 [2] Toda, Y., Kubota, N. (2013). Self-localization based on multiresolution map for remote control of multiple mobile robots. IEEE Transactions on Industrial Informatics, 9(3): 1772-1781. https://doi.org/10.1109/TII.2013.2261306 [3] Suh, J., You, S., Choi, S., Oh, S. (2016). Vision-based coordinated localization for mobile sensor networks. IEEE Transactions on Automation Science and Engineering, 13(2): 611-620. https://doi.org/10.1109/TASE.2014.2362933 [4] Censi, A., Franchi, F., Marchionni, L., Oriolo, G. (2013). Simultaneous calibration of odometry and sensor parameters for mobile robots. IEEE Transactions on Robotics, 29(2): 475-492. https://doi.org/10.1109/TRO.2012.2226380 [5] Lv, W., Kang, Y., Qin, J. (2019). Indoor localization for skid-steering mobile robot by fusing encoder, gyroscope, and magnetometer. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 49(6): 1241-1253. https://doi.org/10.1109/TSMC.2017.2701353 [6] Martinez-Melchor, J.A., Jimenez-Fernandez, V.M., Vazquez-Leal, H., Filobello-Nino, U.A. (2018). Optimization of collision-free paths in a differential-drive robot by a smoothing piecewise-linear approach. Comp. Appl. Math., 37: 4944-4965. https://doi.org/10.1007/s40314-018-0602-x [7] Antonelli, G., Chiaverini, S. (2007) Linear estimation of the physical odometric parameters for differential-drive mobile robots. Auton Robot, 23: 59-68. https://doi.org/10.1007/s10514-007-9030-2 [8] Barfoot, T.D., Clark, C.M. (2003). Motion planning for formations of mobile robots. Robotics and Autonomous Systems, 46(2): 65-78. https://doi.org/10.1016/j.robot.2003.11.004 [9] Choi, H., Yang, K.W., Kim, E. (2014). Simultaneous global localization and mapping. In IEEE/ASME Transactions on Mechatronics, 19(4): 1160-1170. [10] Han, S., Kim, J., Myung, H. (2013). Landmark-based particle localization algorithm for mobile robots with a fish-eye vision system. IEEE/ASME Transactions on Mechatronics, 18(6): 1745-1756. https://doi.org/10.1109/TMECH.2012.2213263 [11] Son, J., Ahn, H. (2015). 
Formation coordination for the propagation of a group of mobile agents via self-mobile localization. IEEE Systems Journal, 9(4): 1285-1298. https://doi.org/10.1109/JSYST.2014.2326795 [12] Kim, J., Chung, W. (2016). Localization of a mobile robot using a laser range finder in a glass-walled environment. IEEE Transactions on Industrial Electronics, 63(6): 3616-3627. https://doi.org/10.1109/TIE.2016.2523460 [13] Barfoot, T.D., Clark, C.M., Rocks, S.M., D Eleuterio, G.M.T. (2002). Kinematic path-planning for formations of mobile robots with a nonholonomic constraint. Proceedings of the 2002 IEEE/RSJ International conference on Intelligent Robots and Systems, 3: 2819-2824. https://doi.org/10.1109/IRDS.2002.1041697 [14] Park, S., Roh, K.S. (2016). Coarse-to-fine localization for a mobile robot based on place learning with a 2-D range scan. IEEE Transactions on Robotics, 32(3):528-544. https://doi.org/10.1109/TRO.2016.2544301 [15] Meng, Q.H., Bischoff, R. (2004). Odometry based pose determination and errors measurement for a mobile robot with two steerable drive wheels. Journal of Intelligent and Robotic Systems, 41: 263-282. https://doi.org/10.1007/s10846-005-3506-0 [16] Choi, B.S., Lee, J.W., Lee, J.J., Park, K.T. (2011). A hierarchical algorithm for indoor mobile robot localization using RFID sensor fusion. IEEE Transactions on Industrial Electronics, 58(6): 2226-2235. https://doi.org/10.1109/TIE.2011.2109330 [17] Miah, M.S., Knoll, J., Hevrdejs, K. (2018). Intelligent range-only mapping and navigation for mobile robots. IEEE Transactions on Industrial Informatics, 14(3): 1164-1174. https://doi.org/10.1109/TII.2017.2780247 [18] Eberst, C., Andersson, M., Christensen, H.I. (2000). Vision- based door-traversal for autonomous mobile robots. Proceeding of the 2000 IEEE/RSJ International Conferences on Intelligent Robots and Systems, pp. 620-625. https://doi.org/10.1109/IROS.2000.894673 [19] Blanco, J.L., Fernández-Madrigal, J.A., Gonzalez, J.. (2008). A novel measure of uncertainty for mobile robot SLAM with Rao–Blackwellized particle filters. The International Journal of Robotics Research, 27(1): 73-89. https://doi.org/10.1177/0278364907082610 [20] Liu, M. (2016). Robotic online path planning on point cloud. IEEE Transactions on Cybernetics, 46(5): 1217-1228. https://doi.org/10.1109/TCYB.2015.2430526 [21] Chwa, D. (2016). Robust distance-based tracking control of wheeled mobile robots using vision sensors in the presence of kinematic disturbances. IEEE Transactions on Industrial Electronics, 63(10): 6172-6183. https://doi.org/10.1109/TIE.2016.2590378 [22] Cheon, H., Kim, B.K. (2019). Online bidirectional trajectory planning for mobile robots in state-time space. IEEE Transactions on Industrial Electronics, 66(6): 4555-4565. https://doi.org/10.1109/TIE.2018.2866039 https://doi.org/10.1109/TMECH.2013.2274822 [23] Lee, G., Chong, N.Y., Christensen, H. (2010). Tracking multiple moving targets with swarms of mobile robots. Intel Serv Robotics 3: 61-72. https://doi.org/10.1007/s11370-010-0059-2 [24] Chen, H., Sun, D., Yang, J., Chen, J. (2010). Localization for multirobot formations in indoor environment. IEEE/ASME Transactions on Mechatronics, 15(4): 561-574. https://doi.org/10.1109/TMECH.2009.2030584
CommonCrawl
What is the diffuse ionized gas?

I've been trying to find a clean definition of what people mean when they talk about diffuse ionized gas in the interstellar medium, but I couldn't find anything so far. Apparently it's supposed to be trivial. Is it simply ionized gas, mixed with non-ionized gas in the ISM through the process of diffusion? Tags: interstellar-medium, gas. Asked by mivkov.

Comments:
Please post links to examples of this usage.
It's not mixed via diffusion, it is diffuse, as in rarefied, as in the opposite of dense. – FJC
@FJC that cleared it up! Didn't know that was a synonym for rarefied. – mivkov

Answer:
The "diffuse ionized gas" (DIG) is another term for the phase of the interstellar medium (ISM) usually called the warm ionized medium (WIM). With a temperature of the order $10^4\,\mathrm{K}$, but extending to lower and higher temperatures, it is hot enough to keep hydrogen ionized, and various metals exist as low-ionization species, such as S II, N II, and O II, and even (weak) O III (e.g. Hill et al. 2012; Zang et al. 2017; Weilbacher et al. 2018). Rough pressure equilibrium with the other phases of the ISM results in characteristic densities of $0.1\,\mathrm{cm}^{-3}$, but typical ranges are an order of magnitude to both sides. A review of the DIG/WIM can be found in Haffner et al. (2009).

Answer:
I don't think it means anything special. Take, for example, the claim at Wikipedia: "An interstellar cloud is generally an accumulation of gas, plasma, and dust in our and other galaxies. Put differently, an interstellar cloud is a denser-than-average region of the interstellar medium (ISM), the matter and radiation that exists in the space between the star systems in a galaxy. Depending on the density, size, and temperature of a given cloud, its hydrogen can be neutral, making an H I region; ionized, or plasma, making it an H II region; or molecular, which are referred to simply as molecular clouds, or sometimes dense clouds. Neutral and ionized clouds are sometimes also called diffuse clouds. An interstellar cloud is formed by the gas and dust particles from a red giant in its later life." As well as another statement: "In all phases, the interstellar medium is extremely tenuous by terrestrial standards. In cool, dense regions of the ISM, matter is primarily in molecular form, and reaches number densities of $10^6$ molecules per $\mathrm{cm}^3$ (1 million molecules per $\mathrm{cm}^3$). In hot, diffuse regions of the ISM, matter is primarily ionized, and the density may be as low as $10^{-4}$ ions per $\mathrm{cm}^3$. Compare this with a number density of roughly $10^{19}$ molecules per $\mathrm{cm}^3$ for air at sea level, and $10^{10}$ molecules per $\mathrm{cm}^3$ (10 billion molecules per $\mathrm{cm}^3$) for a laboratory high-vacuum chamber."

Comments:
Carl, you need to fix the numbers - e.g. 10^19 rather than 1019. – Chappo Hasn't Forgotten Monica
@Chappo thanks - I didn't pay attention when I cut/pasted
CommonCrawl
How does Wikipedia's definition of the Lebesgue integral relate to more common definitions? Wikipedia presents a definition of the Lebesgue integral (of a nonnegative function $f$) that I hadn't encountered before: Let $f^*(t)=\mu \left (\{x\mid f(x)>t\} \right )$. The Lebesgue integral of $f$ is then defined by $\int f\,d\mu = \int_0^\infty f^*(t)\,dt$ where the integral on the right is an ordinary improper Riemann integral My question is, what is the relation between this definition and more standard definitions, like the supremum of $\int \phi\,d\mu$ over simple functions $\phi$ such that $0 \leq \phi \leq f$? I'm also interested in understanding the intuitive justification provided for this definition: Using the "partitioning the range of $f$" philosophy, the integral of $f$ should be the sum over $t$ of the area of the thin horizontal strip between $y = t$ and $y = t + dt$. This area is just $\mu \left (\{x\mid f(x)>t\} \right ) \,dt$. I don't see why that's the area of the infinitesimal strip. Clearly the width of the strip is $dt$, but why is the length of the strip $\mu \left (\{x\mid f(x)>t\} \right ) \,$? EDIT: It seems to me that the same reasoning that shows that the Lebesgue integral of $f$ can be expressed as $\int_0^\infty f^*(t)\,dt$ can also be used to express $f$ itself as $f(x) = \int_0^\infty f^{**}(x,t)\,dt$, where $f^{**}(x,t) = \chi_{\{s:f(s)>t\}}(x)$. Am I right about that? real-analysis measure-theory lebesgue-integral lebesgue-measure Keshav SrinivasanKeshav Srinivasan $\begingroup$ Tell me if I'm wrong, but I think this particular characterization of the Lebesgue integral is mentioned far more often when the subject is probability than when it's anything else. $\endgroup$ – Michael Hardy Apr 23 '14 at 18:01 Say you have a nonnegative function $f$. Let $M > 0$ be fixed and let $n \in \mathbb N$. Partition the range of $f$ into the sets $\left\{\dfrac{kM}{2^n} < f \le \dfrac{(k+1)M}{2^n}\right\}$ for $0 \le k \le 2^n - 1$, and $\{f > M\}$. Approximate the integral of $f$ by $$ \sum_{k=0}^{2^n - 1} \frac{kM}{2^n} \mu \left( \left\{ \frac{kM}{2^n} < f \le \frac{(k+1)M}{2^n} \right\} \right) + M \mu(\{f > M\}).$$ It isn't hard to see that this expression approximates the integral of $f$ because if $f$ is integrable then $M \mu(\{f > M\}) \to 0$ as $M \to \infty$, and the simple function $$\sum_{k=0}^{ 2^n - 1} \frac{kM}{2^n} \chi_{\{\frac{kM}{2^n} < f \le \frac{(k+1)M}{2^n}\}}(x)$$ approximates $f$ uniformly on the set $\{f \le M\}$. Define $a_k = \dfrac{kM}{2^n}$ and $b_k = \displaystyle \mu \left( \left\{ f > \frac{kM}{2^n} \right\} \right)$. The sum above may be written as $$\sum_{k=0}^{2^n - 1} a_k (b_k - b_{k+1}) + a_{2^n} b_{2^n}.$$ Now employ the summation-by-parts trick to find this equal to $$ \sum_{k=1}^{2^n} (a_k - a_{k-1}) b_k = \sum_{k=1}^{2^n} \frac{M}{2^n} \mu( \{f > kM/2^n\}).$$As $n \to \infty$, the latter integral converges to $$\int_0^M \mu(\{f > t\}) \, dt.$$ Finally let $M \to \infty$ to get the integral of $f$. Umberto P.Umberto P. $\begingroup$ Thanks. Do you have any thoughts on Wikipedia's intuitive justification of the definition? Why would the area of the infinitesimal horizontal strip between $t$ and $t+dt$ be $\mu \left (\{x\mid f(x)>t\} \right ) \,dt$? $\endgroup$ – Keshav Srinivasan Apr 23 '14 at 18:31 Umberto P.'s proof is quite rigorous, but I thought I'd try to give a slightly more intuitive (but less rigorous) explanation of how the Wikipedia definition follows from more common ways of thinking about the Lebesgue integral. 
Let's partition the range $[0,\infty)$ into $y_1,y_2,y_3,\ldots$ (where $y_1=0$), and let's choose a point $y_i^*$ from each subinterval $[y_i,y_{i+1}]$. Then we can approximate our function $f$ by assuming that all points $x$ which map to the interval $[y_i,y_{i+1}]$ only map to the point $y_i^*$, so that we're approximating $f$ by the simple function $\sum_{i=1}^{\infty} y_i^*\chi_{\{x:y_i\leq f(x)\leq y_{i+1}\}}$, and thus approximating the Lebesgue integral of $f$ by $\sum_{i=1}^{\infty} y_i^*\mu({\{x:y_i\leq f(x)\leq y_{i+1}\}})$. Now notice that ${\{x:y_i\leq f(x)\leq y_{i+1}\}}$ is the set of points for which $f(x)$ is between $y_i$ and $y_{i+1}$, or to put it another way, $f(x)$ is greater than $y_i$ but is not greater than $y_{i+1}$, so we can express the set as ${\{x: f(x) > y_i\}}-{\{x: f(x) > y_{i+1}\}}$. Thus $\mu({\{x:y_i\leq f(x)\leq y_{i+1}\}})=\mu({\{x: f(x) > y_i\}})-\mu({\{x: f(x) > y_{i+1}\}})$, which by definition is equal to $f^*(y_i)-f^*(y_{i+1})$. (Note: I'm playing fast and loose with the distinction between less than and less than or equal to.) So we can rewrite our approximation of the Lebesgue integral as $\sum_{i=1}^{\infty} y_i^*(f^*(y_i)-f^*(y_{i+1})) = -\sum_{i=1}^{\infty} y_i^*(f^*(y_{i+1})-f^*(y_i))$. Now we can see that in the limit as the width of the intervals $[y_i,y_{i+1}]$ goes to zero (i.e. the mesh of the partition goes to zero), this expression becomes the Riemann-Stieltjes integral $-\int_0^\infty y\,df^{*}(y)$. Since $f^*$ is monotone, it's of bounded variation, so we're allowed to apply integration by parts to it, so we get $-(\lim_{y\rightarrow\infty}(yf^*(y))-0\cdot f^*(0)-\int_0^\infty f^*(y)\,dy)$. I think we can show that if the Lebesgue integral of $f$ exists, then $\lim_{y\rightarrow\infty}(yf^*(y))=\lim_{y\rightarrow\infty}(y\mu({\{x: f(x) > y\}}))=0$. So all we're left with in the end is $\int_0^\infty f^*(y)\,dy$, which is the definition in the Wikipedia article. So at least that makes sense to my satisfaction. I still don't understand, however, Wikipedia's intuitive justification for the result, specifically why the area of the infinitesimal horizontal strip between $y=t$ and $y=t+dt$ is $\mu({\{x: f(x) > t\}})\,dt$.

Forgive my MS Paint, and let $dt>0$ be a tiny change in height. I believe the Wikipedia argument can be intuited as follows. We approximate the integral of $\color{blue}f$ with the area of the $\color{red}{\text{rectangles}}$. $\color{green}{\text{(1)}} = \{x : f(x) > 0\} = [f>0]$. (This is our entire support.) The area of the first rectangle is $\mu(f>0)dt$. $\color{green}{\text{(2)}} = [f>dt]$. The area of the second collection of rectangles is $\mu(f>dt)dt$. In each case, note that the base of each rectangle at height $t$ is $[f>t]$. In the end we are left with $$\int f \approx \sum_{k=1}^\infty \mu(f>k\,dt)\,dt$$ and in the limit $dt\to 0$ we should expect to recover the form as claimed, $\int f = \int_0^\infty \mu(f>t)\,dt$. Calvin Khor
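For readers who like to sanity-check such identities numerically, here is a small Python sketch (mine, not from the thread) that compares both sides of $\int f\,d\mu=\int_0^\infty \mu(\{f>t\})\,dt$ for a concrete function on $[0,1]$ with Lebesgue measure; both integrals are approximated by ordinary Riemann sums, and the choice $f(x)=x^2$ is arbitrary.

import numpy as np

# f(x) = x^2 on [0, 1] with Lebesgue (length) measure; the exact integral is 1/3.
x = np.linspace(0.0, 1.0, 10_001)
f = x**2

lhs = np.trapz(f, x)  # ordinary integral of f over [0, 1]

# Right-hand side of the layer-cake formula: integrate t -> mu({f > t}).
t = np.linspace(0.0, f.max(), 2_001)
mu_level = np.array([np.mean(f > ti) for ti in t])  # length of {x : f(x) > t}, approximated
rhs = np.trapz(mu_level, t)

print(lhs, rhs)  # both close to 0.3333...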
CommonCrawl
BioMedical Engineering OnLine Stiffness optimization and reliable design of a hip implant by using the potential of additive manufacturing processes Lena Risse ORCID: orcid.org/0000-0002-9320-74351,2, Steven Woodcock1,2, Jan-Peter Brüggemann3, Gunter Kullmer1,2 & Hans Albert Richard1,2 BioMedical Engineering OnLine volume 21, Article number: 23 (2022) Cite this article Due to the steadily increasing life expectancy of the population, the need for medical aids to maintain the previous quality of life is growing. The basis for independent mobility is a functional locomotor system. The hip joint can be so badly damaged by everyday wear or accelerated by illness that reconstruction by means of endoprostheses is necessary. In order to ensure a high quality of life for the patient after this procedure as well as a long service life of the prosthesis, a high-quality design is required, so that many different aspects have to be taken into account when developing prostheses. Long-term medical studies show that the service life and operational safety of a hip prosthesis by best possible adaptation of the stiffness to that of the bone can be increased. The use of additive manufacturing processes enables to specifically change the stiffness of implant structures. Reduced implant stiffness leads to an increase in stress in the surrounding bone and thus to a reduction in bone resorption. Numerical methods are used to demonstrate this fact in the hip implant developed. The safety of use is nevertheless ensured by evaluating and taking into account the stresses that occur for critical load cases. These results are a promising basis to enable longer service life of prostheses in the future. In Germany, approximately 210,000 initial implantations of total hip endoprostheses and 30,000 revision operations were carried out in 2011 [1]. In addition, about 125,000 knee prostheses were implanted annually [2]. Between 2007 and 2017, the number of implants increased by 30–40% [3]. This makes this surgical procedure one of the most common orthopedic treatments of our time [4]. The aim of the surgery is to improve the patient's quality of life by restoring freedom of movement in the affected joint and reducing pain [5]. Continuous research in the field of hip endoprosthetics has led to innovations in technology, materials science, surgical techniques and methods of fixation and sterilization, which have contributed to increasing the life span of implants. Today, 75% of implanted hip endoprostheses can remain in the body for up to 15 years [6, 7]. Despite the constant innovations, aseptic loosening of the prosthesis, caused by so-called "stress shielding", remains an existing problem [7, 8]. Because the prosthesis is much more rigid than the bone, there is a lack of stress in the contact zone between the prosthesis and the femur, so that the bone in the area of the prosthesis recedes [7]. Furthermore, the aforementioned difference in stiffness can lead to pain for the patient. Numerous attempts have been made to reduce the stiffness and eliminate the associated complications [9]. This can result in more complex geometries, such as lattice structures, so that modern manufacturing methods, like the selective laser melting (SLM) process [10], offer themselves for promising further research. 
In the context of this paper, the high design freedom of additive manufacturing processes in combination with computer aided engineering (CAE) methods is used to provide approaches to solve the existing stiffness problem in hip endoprosthetics. Using stress-adapted geometries and the finite element method, stiffness-adapted variants of a short shaft hip endoprosthesis are developed in an iterative process. Further optimization steps are continuously derived from the analysis of stresses and deformations in prosthesis and bone. The optimization goal is the reduction of the current stiffness and the resulting increase and homogenization of the stress in the surrounding bone. Furthermore, the focus is on improved fixation and durability with regard to the period of use as well as a compact, bone-saving design and direct force transmission. The relevance of adapting the implant stiffness to the possible time of implant use is visualized in Fig. 1. Implants of different stiffness placed in a schematic bone are subjected to bending loads and the resulting qualitative stress is compared to that of the healthy bone (Fig. 1a). In this context, blue areas reflect low stress and red areas high stress. Figure 1b illustrates the stress situation in the bone when using an implant that is too stiff. The entire load is carried by the implant, so that increased stress in the bone occurs only in the distal end, while the proximal part is free of stress. This effect, known as "stress shielding", leads to loosening of the prosthesis, since the unstressed bone is gradually degraded. An implant that is too flexible does not take up enough of the applied load, so that the entire bone is subjected to greater stress (Fig. 1c). An implant with a stiffness adapted to the bone has only a minor effect on the loading situation compared to healthy bone (Fig. 1d). Both the bone and the implant are stressed over the entire implant length. This leads to good ingrowth of the implant and avoids loosening.

Fig. 1. Qualitative numerical analysis to illustrate the influence of implant stiffness on the stress situation in the bone. (a) Healthy bone without implant. (b) Bone with too stiff implant ("stress shielding"). (c) Bone with implant that is too flexible. (d) Implant with adjusted stiffness

Optimization results

The optimization process is carried out by systematically changing the cross-sectional profile of the hip prosthesis for stepwise stiffness adjustment, which is shown in Fig. 2. Since the two load cases mainly cause bending stress, the moment of inertia Iy of the cross-section is decisive for the stiffness of the implant. The more material is placed far away from the neutral axis, the higher is the resulting moment of inertia. This can be illustrated using the formula for rectangular profiles: $$ I_{y} = \frac{b \cdot h^{3}}{12}, $$ where b is the width of the profile parallel to the neutral axis and h is the height of the profile perpendicular to the neutral axis.

Fig. 2. Geometry adaptation for stiffness reduction. (a) Initial model (full rectangular profile, Imax = 4915 mm⁴, Imin = 2507 mm⁴) in partial section. (b) Optimized prosthesis with U-profile (Imax = 2953 mm⁴, Imin = 2276 mm⁴) as basic geometry in partial section. (c) Prosthesis with U-hollow profile (Imax = 1387 mm⁴, Imin = 1043 mm⁴) in partial section

The initial model is similar to the standard cross-section of commercially available short shaft hip endoprostheses. The nearly rectangular cross-section (Fig. 2a) has the highest moment of inertia of the three variants.
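To make the cross-section argument tangible, the following short Python sketch (an illustration added here, using simplified geometry rather than the real prosthesis profiles) compares the second moment of area of a solid rectangle with that of a hollow rectangle of the same outer dimensions, showing how removing interior material lowers the bending stiffness while the outer envelope stays unchanged. The dimensions are placeholder values.

def rect_I(b, h):
    """Second moment of area of a solid rectangle about its horizontal centroidal axis (b*h^3/12)."""
    return b * h**3 / 12.0

def hollow_rect_I(b, h, t):
    """Hollow rectangle with constant wall thickness t and the same outer envelope b x h."""
    return rect_I(b, h) - rect_I(b - 2 * t, h - 2 * t)

# Illustrative dimensions in mm (not the actual implant cross-section)
b, h, t = 14.0, 16.0, 2.0
I_solid = rect_I(b, h)
I_hollow = hollow_rect_I(b, h, t)
print(I_solid, I_hollow, I_hollow / I_solid)  # the hollow section keeps the envelope but is markedly less stiff

Since the bending stiffness scales with E times I, such a change reduces the flexural rigidity of the implant without shrinking the outer contour needed for anchorage, which is exactly the lever used in Fig. 2.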
A first reduction of the moment of inertia is achieved by changing the cross-section. Compared to the initial model, the U-profile has less material far away from the neutral layer (Fig. 2b). Since the size of the outer cross-section should not be reduced, in order to ensure adequate anchoring in the bone, material inside the implant is removed to further reduce the stiffness. The final basic geometry, designed as a U-profile with hollow chambers, is visualized in Fig. 2c. Since the shaft area of the prosthesis is to be considered in connection with the bone after implantation, the reduced torsional stiffness of the prosthesis due to the change in geometry is negligible. In addition, the changed cross-sectional geometry provides a higher resistance against twisting of the implant in the bone, which enables a more solid anchorage in the femur.
Final design using numerical methods
The choice of a U-hollow profile with constant wall thickness is not appropriate if the aim is the most homogeneous material utilization possible. Therefore, a variation of the wall thickness is carried out in an iterative process. Numerical analyses are continuously used to check that the design is safe for use. In the shaft area, the wall thicknesses can be reduced so that the desired reduction in stiffness is achieved at the same time. In the neck area of the prosthesis, which is not implanted in the bone, a high degree of stiffness and load-bearing capacity is required to ensure that the function is fulfilled. Therefore, the wall thickness in the neck area is larger. The stress-adjusted wall thickness dimensioning that finally results is shown in Fig. 3a. Production-related restrictions in the SLM process prevent a further reduction of the wall thickness in the distal shaft area. The numerical analysis for the critical load case stumbling illustrates compliance with the maximum permissible stress σzul of the prosthesis.
Fig. 3: Development of the final prosthesis variant under consideration of SLM-process-related restrictions. (a) Determination of a suitable wall thickness dimensioning. (b) Use of grid structures for local load-bearing capacity increase. The red circle indicates the region with reduced stresses because of the inner grid structure
The optimization task can be considered in two parts. In the implanted stem area of the prosthesis, stiffness reduction is the primary goal, while in the more highly stressed neck area, the focus is on sufficiently high strength. In the highly loaded neck area of the prosthesis, a local stress increase is visible. To homogenize the stresses and to increase the load capacity, a grid structure is inserted locally in the highly stressed areas (Fig. 3b). By this procedure, the stress in the neck area can be reduced without significantly influencing the stiffness of the shaft or the weight of the prosthesis. The optimization measures carried out result in a reduction of the stiffness in the shaft area of the prosthesis. Furthermore, the local use of grid structures in the highly stressed neck area of the implant has increased its load-bearing capacity and reduced the resulting stresses. To validate the success of the stiffness reduction in the shaft area of the prosthesis, the loading situations within the contact surface of the femur at the beginning and at the end of the optimization process are compared for the load case stumbling in Fig. 4. Successful structural optimization leads to an increase in the stress on the bone tissue surrounding the prosthesis.
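The iterative wall-thickness dimensioning described above can be summarized schematically. The loop below is only an illustration, not the authors' actual workflow: the allowable stress, the minimum printable wall thickness, the step size and the simple 1/t stress model are invented placeholders, and in the real process each evaluation corresponds to a finite element run.

```python
# Schematic sketch of the iterative wall-thickness dimensioning described above.
# All numbers and the stress model are illustrative placeholders.

SIGMA_ALLOW = 400.0   # MPa, assumed permissible stress for the stumbling load case
T_MIN_SLM   = 1.0     # mm, assumed minimum wall thickness printable by SLM
T_STEP      = 0.25    # mm, thickness reduction per iteration

def max_stress(t):
    """Toy stand-in for an FE analysis: peak stress grows as the walls get thinner."""
    return 550.0 / t  # MPa, crude 1/t placeholder

def dimension_wall(t_start):
    t = t_start
    # Reduce the wall (and thus the stiffness) as long as the design stays safe
    # and the SLM process can still manufacture it.
    while t - T_STEP >= T_MIN_SLM and max_stress(t - T_STEP) <= SIGMA_ALLOW:
        t -= T_STEP
    return t

print(f"chosen wall thickness: {dimension_wall(3.0):.2f} mm")
```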
Fig. 4: Analysis of the change in the stress situation within the contact surface of the femur due to the stiffness adjustment. The red ellipses indicate the regions with increased stresses
The reduction of the stress-free and low-stressed areas becomes visible in the view from anterior (front) as well as from posterior (back). The result is a more homogeneously stressed bone contact surface, which allows a more extensive transfer of stress to the bone and reduces the risk of bone degradation due to stress shielding.
Experimental testing
The short shaft hip endoprostheses are built up in a "standing" orientation in order to keep the process-induced internal stresses in the titanium aluminum alloy component as small as possible, owing to the smaller area to be exposed in each layer. In addition, the proportion of support structure is minimized in this way. In order to validate the design assumptions and verify the operational safety of the prosthesis, experimental tests were performed following the applicable testing standards. The experimental tests were passed successfully: no visible deformation, no deformation affecting the test process and no visible cracks occurred (see Fig. 5).
Fig. 5: Experimentally tested prototype. (a) Prosthesis embedded in resin. (b) Detailed view of the undamaged neck of the prosthesis after experimental testing. (c) Detailed view of the undamaged neck of the prosthesis after experimental testing from a different perspective
The numerical design of the developed prototype can be described as reliable. This was also confirmed by experimental investigations. However, various questions still need to be clarified before it can actually be used in the human body. For example, powder removal, behavior in contact with human tissue and various further investigations are necessary in this context. Nevertheless, these results show promising potential for the use of selective laser melting to reduce the difference in stiffness between bone and implants and thus to reduce stress shielding and aseptic loosening. Many factors are relevant when it comes to the design and approval of novel implants. Despite the enormous effort involved, continuous further development is necessary to meet the requirements of an aging society. Additive manufacturing processes are a promising element in this further development. Thanks to the great freedom of design, it is possible to tailor geometries more closely to the actual application, so that the problem of stress shielding, among other things, can be effectively countered. Other studies have already exploited the possibilities of the SLM process for prosthetics [11, 12]. In this way, for example, porous structures could be created to promote the ingrowth of bone into the prosthetic structure. In the approach chosen here, this was not done because it makes the prosthesis more difficult to remove and revise, and young patients were chosen as the target group. The main aim of these investigations is to enable stiffness adjustment and the associated increase in stresses in the surrounding bone. The finite element method is a suitable tool for investigating the mechanical effects of the changed prosthesis geometry. A variety of previous investigations [13,14,15] provide access to almost realistic simulation boundary conditions. In a study by Cilla et al. [16], an FE model with a complete femur including all joint and muscle forces is used to investigate the effects of prosthesis stem modifications to reduce stress shielding.
The FE model used in this study (see Fig. 5) considers the femur above the knee joint and the joint and muscle forces applied at the proximal end. Although the model used here is not as sophisticated as the model used by Cilla, it is suitable for investigating the principal effects of stem modifications on the stresses on the bone in the contact area. Nevertheless, due to the high safety relevance, both extensive experimental and clinical studies are necessary to validate the results. The change in cross-section to a U-profile represents a new approach that should bring various advantages. On the one hand, the U-shape allows the reduction of the moment of inertia without the implantation area becoming too small. On the other hand, twisting of the implant after implantation is prevented and, in addition, a larger contact area between the prosthesis and the bone is created so that a better adhesion can be realized. Due to the cross-sectional size tapering in the distal direction, sinking of the prosthesis stem into the bone shaft is prevented. In addition, the increased contact surface between bone and implant results in a higher connection strength. In order to be able to verify the actual influence of the new implant geometry with adapted stiffness on the service life of the implant and the reduction of bone resorption, far-reaching clinical studies are necessary. However, the results of the numerical investigations are promising and the safety of use has already been confirmed experimentally. However, the experimental investigations only represent a kind of initial feasibility study. The basic operational reliability with regard to the assumed mechanical loads could be confirmed. The special conditions of use within the human body and other essential test criteria have not yet been investigated and evaluated as part of this study. Therefore, among other things further investigations are required to remove possible powder residues before use in living tissue. Within the scope of this article, a stiffness-adapted short shaft hip endoprosthesis could be developed by targeted use of the potentials of selective laser melting, in particular the possibility of creating filigree internal grid structures and variable wall thicknesses as well as internal cavities. By numerical analysis of the stress situations of bone and implant, the problem of "stress shielding" and thus potential problems of the patient could be reduced and the expected service life of the prosthesis increased. The stiffness-adjusted hip endoprostheses were checked for their operational reliability by numerical methods. The design was validated by experimental component tests according to the ISO testing standards. The findings on stiffness adjustment by exploiting the potential of selective laser melting can now be transferred to other components. Especially for implants, the problem of the stiffness difference between bone and implant is of immense importance, but also technical applications can profit from these considerations. Preliminary considerations In order to be able to carry out a systematic optimization process, some preliminary considerations are necessary. These concern on the one hand the desired requirements for the implant and the analysis of various factors influencing the duration of use, and on the other hand the derivation of further optimization steps on the basis of previous preliminary studies. 
The best possible observance and retention of the biomechanics in the hip joint and thus the avoidance of major impairments is a primary goal of artificial joint replacement. With its structure, the hip ensures the biomechanical function of enabling movements between the pelvis and the femur and at the same time ensuring the transmission of forces [17]. In order to achieve a good freedom of movement of the joint, the diameter of the femoral neck is smaller than that of the femoral head. Further mechanical parameters are influenced by the centrum–collum–diaphyseal angle (CCD angle). Depending on the angle, the loads acting on the hip joint change. Normally the CCD angle is 125° [17]. The femur is the largest bone in the human body and belongs to the group of long bones [18]. The tube-like bone shaft is made of a solid substance, the compacta. The bone ends consist of a spongy structure, the cancellous bone [18, 19]. The bone structure is always in continuous remodeling, so that optimal force absorption is guaranteed at all times. Less stressed bone material is reduced, while more stressed areas are strengthened [20, 21]. The hip joint is exposed to a wide range of stress situations in everyday life. When designing an artificial hip joint replacement, these load situations must be quantified in order to guarantee the safety of the implant. The load assumptions used are based on a study by Bergmann et al. [14]. In this context, a prosthesis stem was developed for data acquisition, which was equipped with appropriate measuring technology, including telemetric data transmission [14]. Within the scope of this article, two exemplary load cases for the development process are taken from this study. On the one hand, walking is considered as an everyday load on the hip joint for the design against failure due to fatigue. The contact force FK between the caput femoris (femoral head) and the acetabulum is 280% of body weight. On the other hand, the stumbling that causes the highest stress (870% of body weight) is used to exclude a forced fracture [14]. To determine the angle of force application α, the one-legged stance is used as a basis, since the stress on the hip joint is highest when only one leg is loaded. In this case, the resulting angle of force application α to the vertical is 16° [22]. The prosthesis should be designed for a middle-aged male patient (weight 79 kg). Accordingly, the contact force FK,walking = 2 170 N occurs during walking. When stumbling, forces of FK,stumbling = 6 742 N act [23]. Three factors have a major influence on the stability of an implant: the fit, the fixation and the stiffness. With regard to stability, the primary stability immediately after implantation and the stability after growth must be considered. Poor primary stability leads to micromovements of the prosthesis, resulting in pain for the patient. Poor stability after growth can result from bone resorption caused by inadequate load transfer to the bone [24]. A high fit (form fit between prosthesis and implant) has a positive effect on the primary stability, but a negative effect on the stability after growth. Accordingly, a suitable compromise must be chosen in this context. With regard to fixation in the femur, there are two variants: anchoring with bone cement and cementless anchoring. For younger patients, the cementless version is usually preferred due to numerous advantages, such as easier revision surgery and the avoidance of tissue damage caused by the cement polymer [25,26,27,28]. 
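The contact forces quoted above for the 79 kg patient follow directly from the body weight and the load factors of 280% and 870% reported by Bergmann et al.; the short check below reproduces them, assuming g = 9.81 m/s².

```python
# Quick check of the load assumptions quoted above (g = 9.81 m/s^2 assumed).
g = 9.81                          # m/s^2
body_weight = 79.0 * g            # N, 79 kg patient

f_walking   = 2.8 * body_weight   # 280 % of body weight
f_stumbling = 8.7 * body_weight   # 870 % of body weight

print(f"F_K,walking   = {f_walking:6.0f} N")    # approx. 2170 N
print(f"F_K,stumbling = {f_stumbling:6.0f} N")  # approx. 6742 N
```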
Fixation with bone cement has a positive effect on primary stability, but loosening symptoms may occur over time. The reduction of the stiffness to a value similar to that of the bone has positive effects on the primary and long-term stability of the hip implant. One way to vary the stiffness of the implant is the choice of the material. It must be ensured that the selected material not only provides the desired stiffness, but also guarantees the fulfilment of the function by sufficiently good mechanical characteristics. Furthermore, it has to be biocompatible; sufficient chemical and biological compatibility of implant and body excludes damage to the surrounding tissue [29]. A material that meets the above-mentioned requirements and represents an alternative from a stiffness point of view is the titanium aluminum alloy Ti6-4, which can be processed reliably by selective laser melting. An overview of the mechanical material characteristics, determined on laser-melted test specimens, is shown in Table 1.
Table 1: Material characteristics of the Ti6-4 alloy [30]
A striking feature is the low Young's modulus (half the Young's modulus of steel) compared to other biocompatible metallic materials in combination with high strength values, which has a positive influence on the stiffness optimization of implants. Another advantageous aspect of this material is its good osteogenetic property. Since Ti6-4 is bioinert, no harmful interactions with the body's own tissue occur. In order to guarantee the safety of the implant, a static strength and a fatigue strength verification are carried out. The basic statement of these two verifications is that the effective stresses in the component must always be less than the load-bearing capacity of the material [31]. For the verification that no failure due to plastic deformation occurs, the von Mises equivalent stress is used for the load case stumbling. To determine the permissible stress on the material side, the yield strength is divided by a safety factor SF against plastic deformation [31]. Cyclic loads usually cause failure by fatigue crack growth. For this reason, when designing the prosthesis for the load case walking, the stresses are evaluated using the equivalent stress according to Navier, since cracks always grow perpendicular to the maximum principal stress [31]. The allowable stress is calculated taking into account the fatigue strength of the material, the technological size coefficient, the surface roughness and a safety factor.
Boundary conditions for numerical simulation
In order to achieve a typical, average FE model of the femur that is as close to reality as possible, CT data of the femurs of different 40- to 60-year-old male patients are taken and transferred to 3D volume models. The slice thickness and the cross-sectional resolution were isotropic and less than 1 mm in all CT images. Using Materialise Mimics, the CT data of the different femurs were analyzed in terms of their density distribution and thus in terms of Young's modulus. The Young's modulus of the femur is assumed to be variable in order to best reflect the prevailing properties of the cortical and cancellous bone. Based on literature values and the results of the density distribution of various CT examinations, areas were defined to which a specific Young's modulus was assigned. The cortical area is assigned a Young's modulus of 20 GPa; the medullary cavity, which is filled with bone marrow, is assigned 1 MPa.
In a transitional area between the cortical and cancellous bone, the Young's modulus gradually decreases until it varies between 100 and 2000 MPa in the cancellous head. However, discrete areas with different linear isotropic material properties are assumed in order to reduce the computing time while still representing a realistic stiffness distribution. The numerical simulations are carried out with the software Abaqus CAE 2017. The inner lattice structure was meshed with beam elements; all other components and prosthetic areas were meshed with quadratic tetrahedral elements (C3D10). The maximum size of the tetrahedral elements is defined as 1.5 mm. Cross-section transitions, notches and other areas with a high stress gradient are meshed correspondingly finer to avoid unwanted numerical errors. Due to the small deformations, a geometrically linear calculation was carried out. The femoral stump used for the simulation is clamped firmly in the anatomically correct position at its end (Fig. 6a). Its position is tilted by 9° in the lateral direction. Furthermore, the collum axis of the femoral head is rotated 12° in an anterior direction with respect to the condylar axis of the distal femur (Fig. 6c); this rotation is described by the antetorsion angle [32]. In the finite element (FE) study, the situation after complete healing and attachment of the bone to the prosthesis is considered. The contact situation of the prosthesis and the surrounding bone is therefore modeled as a tie constraint. Furthermore, the aim is to find measures to modify the shaft of the prosthesis to avoid stress shielding on the bone contact surface. Thus, regions with too low stresses on the bone contact surface are unwanted. Preliminary studies with different frictional coefficients ranging from 0.1 to 0.8 showed that the stresses on the bone surface decrease slightly with increasing frictional coefficient. Since higher stresses are wanted and tie contact is the upper limit of frictional contact, tie contact represents the worst case with regard to this aim. Also for this reason, tie contact is chosen to check that, with the selected measures to alter the shaft of the prosthesis, even the lowest possible stresses on the bone surface are high enough to avoid stress shielding.
Fig. 6: Boundary conditions and influencing factors for numerical analysis. (a) Boundary and constraint conditions for the FE model. (b) Details of the load application points A-A. (c) Alignment of the femur according to the anatomical axes. (d) Relevance of the CCD angle
The respective joint contact forces in x- and z-direction (FK-x, FK-z) are applied as distributed forces in the contact area (923 mm²) via the ball head of the prosthesis. An additional load is added by the abductor muscle group (FM-x, FM-z) (Fig. 6b). The gluteal abductors reduce the extension load in the proximal part of the femoral neck to such an extent that there are effects on the subsequent design of a prosthesis in terms of its stiffness [33, 34]. The amount of muscle force applied is 1.1 times the body weight [35]. To take into account the natural anatomical structure, various dimensions as well as the geometric shape of the prosthesis are relevant; see Fig. 6. The prosthesis is intended to replicate the healthy femur as closely as possible; for example, compliance with the existing CCD angle is relevant (Fig. 6d). The basic geometry developed in the context of this article can be adapted to any variation of the CCD angle. Further relevant dimensions are the head and neck diameter of the prosthesis as well as the cone dimensions.
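As a rough plausibility check of the load application described above, the following sketch estimates the mean contact pressure on the 923 mm² load application area and the abductor muscle force, using the resultant stumbling force for the 79 kg patient from the load assumptions; g = 9.81 m/s² is assumed and the decomposition into x- and z-components is ignored.

```python
# Back-of-the-envelope check of the applied loads described above.
g = 9.81                         # m/s^2
body_weight  = 79.0 * g          # N, 79 kg patient from the load assumptions
contact_area = 923.0             # mm^2, load application area on the ball head

f_stumbling = 8.7 * body_weight  # N, resultant joint contact force (stumbling)
f_muscle    = 1.1 * body_weight  # N, abductor muscle group

print(f"mean contact pressure (stumbling): {f_stumbling / contact_area:4.1f} MPa")
print(f"abductor muscle force:             {f_muscle:4.0f} N")
```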
These dimensions are chosen with regard to [10] to ensure the patient's freedom of movement, sufficient joint stability and a permanent and stable fit of the connection between ball head and prosthesis. The optimization of the prosthesis geometry is carried out with the aid of CAD and FEM software. No optimization software is used; instead, knowledge of technical mechanics, structural mechanics and biomechanics is incorporated into the manual optimization process.
Manufacturing and experimental testing
For the production of the optimized hip endoprostheses, a process that offers a high degree of design freedom is required. Selective laser melting is chosen as it enables the production of filigree, internal and complex geometries [36]. Thus, almost no geometric restrictions have to be taken into account for the optimization process and the optimization success is not impaired by the choice of the manufacturing process. The selected material Ti6-4 can be processed reliably on the SLM280 2.0 machine with standard parameters for this material and is approved for the production of implants. Since the adhesion of bone is enhanced by microporosities on the surfaces of implants, the choice of the SLM process can additionally be seen as positive. The non-implanted neck and head area of the prosthesis can be polished after fabrication to improve the fatigue properties, as polishing increases the surface condition coefficient. Since the SLM process does not yield perfectly isotropic material properties, the material parameters of the build direction with the lowest mechanical properties were selected for determining the maximum allowable stresses in the prosthesis, while isotropic material behavior is nevertheless assumed in the FE simulation. Experimental investigations are carried out to validate the numerically determined operational reliability. The operational reliability of the implant is determined numerically beforehand. For this purpose, the maximum permissible stresses are determined in advance by means of a fatigue strength verification. In addition to a conservative safety factor, a conservative estimate of the surface quality and the corresponding reduction factor are used to provide a further safety reserve. Thus, a fatigue-resistant design should be ensured and no damage or plastic deformation should occur during use. The selected load assumptions, which are based on real measurements, are used for the experimental tests. International standards have been published in order to guarantee standardized testing of this medical product. ISO 7206: Implants for surgery—Partial and total hip joint prostheses describes, in a total of ten documents, the requirements for experimental tests of hip prostheses, which they must pass before the start of a clinical study. Part 4: Determination of endurance properties and performance of stemmed femoral components [37] and Part 6: Endurance properties testing and performance requirements of neck region of stemmed femoral components [38] are particularly relevant for testing the optimized short stem hip endoprosthesis. Based on the specifications from the ISO standards, two devices for the experimental testing of the optimized hip endoprosthesis are being developed, which are compatible with the available testing machine. The testing machine is an in-house development by the Institute of Applied Mechanics for carrying out experimental investigations on various additively manufactured components.
With the aid of a linear motor, static and cyclic test forces can be applied and the component behavior recorded. The prosthesis is embedded in an epoxy resin for the cyclic tests. The positioning and embedding depth of the prosthesis for the experimental investigations were ensured with the aid of an embedding device. The required boundary conditions for the experimental test are also clearly defined in ISO 7206-4 and ISO 7206-6 and were replicated as accurately as possible for these experimental studies. The datasets generated and/or analyzed during the current study are available from the corresponding author on reasonable request. Abbreviations: CAD: Computer-aided design; CAE: Computer-aided engineering; CCD: Centrum–collum–diaphyseal; CT: Computer tomography; FEM: Finite element method; SLM: Selective laser melting. Schmitt-Sody M, Pilger V, Gerdesmeyer L. Rehabilitation und Sport nach Hüfttotalendoprothese. Orthopade. 2011;40:513–9. Wirtz DC, editor. AE-Manual der Endoprothetik—Knie. Heidelberg: Springer Verlag; 2011. OECD. Health at a glance 2019: OECD indicators. Paris: OECD Publishing; 2019. Kirschner P. Hüftendoprothetik. Chirurg. 2005;76:95–104. Reimeringer M, Nuno N. The influence of contact ratio and its location on the primary stability of cementless total hip arthroplasty: a finite element analysis. J Biomech. 2016;49:1064–70. Derar H, Shahinpoor M. Recent patents and designs on hip replacement prostheses. Open Biomed Eng J. 2015;9:92–102. Bieger R, Ignatius A, Decking R, Claes L. Primary stability and strain distribution of cementless hip stems as a function of implant design. Clin Biomech. 2012;27:158–64. Murr L, Martinez E, Gaytan S, Medina F. Next generation orthopaedic implants by additive manufacturing using electron beam melting. J Biomech. 2012. https://doi.org/10.1155/2012/245727. Arabnejad S, Johnston B, Tanzer M, Pasini D. Fully porous 3D printed titanium femoral stem to reduce stress-shielding following total hip arthroplasty. J Orthop Res. 2016. https://doi.org/10.1002/jor.23445. Gebhardt A, Kessler J, Thurn L. 3D-Drucken—Grundlagen und Anwendungen des Additive Manufacturing (AM). 2nd ed. München: Carl Hanser Verlag; 2016. Wirtz TP. Herstellung von Knochenimplantaten aus Titanwerkstoffen durch Laserformen. Dissertation, RWTH Aachen, Aachen. 2005. Sahal M, Chen MT, Sharma S, Nair SS, Nair VG. 3DP materials and methods for orthopedic, dental and maxillofacial implants: a brief comparative report. J 3D Print Med. 2019;3:127–34. Byrne DP, Mulhall KJ, Baker JF. Anatomy and biomechanics of the hip. Open Sports Med J. 2010;4:51–7. Bergmann G, Deuretzbacher G, Heller M, Graichen F, Rohlmann A, Strauss J, Duda GN. Hip contact forces and gait patterns from routine activities. J Biomech. 2001;34:859–971. Bergmann G, Graichen F, Rohlmann A. Hip joint contact forces during stumbling. Langenbecks Arch Surg. 2004;389:53–9. Cilla M, Checa S, Duda GN. Strain shielding inspired re-design of proximal femoral stems for total hip arthroplasty. J Orthop Res. 2017;35:2534–44. Claes L, Kirschner P, Perka C, Rudert M. AE-manual der Endoprothetik—Hüfte und Hüftrevision. Arbeitsgemeinschaft Endoprothetik, Berlin, Heidelberg, 2012. Tittel K. Beschreibende und funktionelle Anatomie des Menschen. 12th ed. Jena: Fischer; 1994. Aumüller G, Wurzinger LJ. Anatomie—208 Tabellen. Ed. 2, Thieme, Stuttgart, 2010. Richard HA, Kullmer G. Biomechanik: Anwendungen mechanischer Prinzipien auf den menschlichen Bewegungsapparat. Wiesbaden: Springer Fachmedien Wiesbaden; 2020. https://doi.org/10.1007/978-3-658-28333-9.
Martinez-Reina J, Garcia-Aznar JM, Dominguez J, Doblare M. A bone remodelling model including the directional activity of BMU's. Biomech Model Mechanobiol. 2009;8:111–27. Pauwels F. Biomechanics of the normal and diseased hip: theoretical foundation and results of treatment; an atlas. Berlin: Springer Verlag; 1976. Julius Wolff Institute: Orthoload Database, 2021. Brown TE, Larson B, Shen F, Moskal JT. Thigh pain after cementless total hip arthroplasty: evaluation and management. J Am Acad Orthop Surg. 2002;10:385–92. Diehl P, Haenle M, Bergschmidt P, Gollwitzer H, Schauwecker J, Bader R, Mittelmeier W. Zementfreie Hüftendoprothetik: eine aktuelle Übersicht. BiomedTech. 2010;55:251–64. Gulow J, Scholz R, von Salis-Soglio G. Kurzschäfte in der Hüftendoprothetik. Orthopäde. 2007;36:353–9. Jerosch J, von Engelhardt LV. Kurzschaft ist nicht gleich Kurzschaft—wo liegen die Unterschiede in der Verankerung und der Biomechanik. Z Orthop Unfall. 2019;157:548–57. Westphal FM, Bishop N, Püschel K, Morlock MM. Biomechanics of a new short-stemmed uncemented hip prosthesis: an in-vitro study in human bone. Hip Int. 2006;16:22–30. Geetha M, Singh AK, Asokamani R, Gogia AK. Ti based biomaterials, the ultimate choice for orthopaedic implants—a review. Prog Mater Sci. 2009;54:397–425. Leuders S, Thöne M, Riemer A, Niendorf T, Tröster T, Richard HA, Maier HJ. On the mechanical behaviour of titanium alloy TiAl6V4 manufactured by selective laser melting: fatigue resistance and crack growth performance. Int J Fatigue. 2013;48:300–7. https://doi.org/10.1016/j.ijfatigue.2012.11.011. Richard HA, Sander M. Fatigue crack growth. Detect–access–avoid. Switzerland: Springer Nature; 2016. Schünke M, Schulte E, Schumacher U. Prometheus—LernAtlas der Anatomie—Allgemeine Anatomie und Bewegungssystem; 182 Tabellen. Thieme, Stuttgart, 2nd edn, 2007.. Stolk J, Verdonschot N, Huiskes R. Hip-joint and abductor-muscle forces adequately represent in vivo loading of a cemented total hip reconstruction. J Biomech. 2001;34:917–26. Aversa R, Florian IT, Petrescu RV, Petrescu V, Apicella A. Flexible stem trabecular prostheses. Am J Eng Appl Sci. 2016;9(4):1213–21. https://doi.org/10.3844/ajeassp.2016.1213.1221. Park Y, Albert C, Yoon YS, Fernlund G, Frei H, Oxland T. The effect of abductor muscle and anterior-posterior hip contact load simulation on the in-vitro primary stability of a cementless hip stem. J Orthop Res. 2010;5:1–14. Gebhardt A. Understanding additive manufacturing: rapid prototyping—rapid tooling—rapid manufacturing. München: Carl Hanser Verlag; 2012. ISO 7206-4:2010-06. Implants for surgery—partial and total hip joint prostheses—Part 4: determination of endurance properties and performance of stemmed femoral components. ISO 7206-6:2013-11. Implants for surgery—partial and total hip joint prostheses—part 6: Endurance properties testing and performance requirements of neck region of stemmed femoral components. Open Access funding enabled and organized by Projekt DEAL. 
Institute of Applied Mechanics, Paderborn University, Pohlweg 47-49, 33098, Paderborn, Germany: Lena Risse, Steven Woodcock, Gunter Kullmer & Hans Albert Richard
Direct Manufacturing Research Center, Paderborn University, Pohlweg 47-49, 33098, Paderborn, Germany
Advanced Mechanical Engineering GmbH, Carlo-Schmid-Allee 3, 44263, Dortmund, Germany: Jan-Peter Brüggemann
Author contributions: LR: conceptualization, methodology, writing—original draft, visualization; SW: methodology, software, visualization, investigation; JPB: methodology, writing—review and editing; GK: supervision, validation; HAR: supervision. All authors read and approved the final manuscript.
Correspondence to Lena Risse.
Risse, L., Woodcock, S., Brüggemann, JP. et al. Stiffness optimization and reliable design of a hip implant by using the potential of additive manufacturing processes. BioMed Eng OnLine 21, 23 (2022). https://doi.org/10.1186/s12938-022-00990-z
Accepted: 07 March 2022
Keywords: Structural optimization, Hip implant
CommonCrawl
Additive noise
An interference added to the signal during its transmission over a communication channel. More precisely, one says that a given communication channel is a channel with additive noise if the transition function $ Q(y, \cdot ) $ of the channel is given by a density $ q(y, \widetilde{y} ) $, $ y \in {\mathcal Y} $, $ \widetilde{y} \in \widetilde {\mathcal Y} = {\mathcal Y} $ (here $ {\mathcal Y} $ and $ \widetilde {\mathcal Y} $ are the spaces of the values of the signals at the input and output of the channel, respectively) depending only on the difference $ \widetilde{y} - y $, i.e. $ q(y, \widetilde{y} ) = q( \widetilde{y} -y) $. In this case the signal $ \widetilde \eta $ at the output of the channel can be represented as the sum of the input signal $ \eta $ and a random variable $ \zeta $ independent of it, called additive noise, so that $ \widetilde \eta = \eta + \zeta $.
If one considers channels with discrete or continuous time over finite or infinite intervals, the notion of a channel with additive noise is introduced by the relation $ \widetilde \eta (t) = \eta (t) + \zeta (t) $, where $ t $ is in the given interval, and $ \eta (t) $, $ \widetilde \eta (t) $ and $ \zeta (t) $ are random processes representing the signals at the input and the output of the channel and the additive noise, respectively; moreover, the process $ \zeta (t) $ is independent of $ \eta (t) $. In particular, if $ \zeta (t) $ is a Gaussian random process, then the considered channel is called a Gaussian channel.
More generally, especially in system and control theory and stochastic analysis, the term additive noise is used for describing the following way noise enters a stochastic differential equation or observation equation: $ d x = f ( x , t ) d t + d w $, $ d y = h ( x , t ) d t + d v $, where $ w $ and $ v $ are Wiener noise processes. The general situation of a stochastic differential equation of the form $ d x = f ( x , t ) d t + g ( x , t ) d w $ is referred to as having multiplicative noise.
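As a concrete illustration of the definition $ \widetilde \eta = \eta + \zeta $, the following short Python sketch simulates a discrete-time Gaussian channel; the input symbols and the noise variance are arbitrary choices made only for the illustration.

```python
# Numerical illustration of a channel with additive noise:
# output = input + noise, with the noise independent of the input.
# A Gaussian noise process makes this a Gaussian channel.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

eta = rng.choice([-1.0, 1.0], size=n)    # input signal (e.g. binary symbols)
zeta = rng.normal(0.0, 0.5, size=n)      # additive Gaussian noise, independent of eta
eta_tilde = eta + zeta                   # channel output

# Empirical check of independence: the correlation of input and noise is ~0,
# and the transition density q(y~ - y) is simply the N(0, 0.25) density.
print("corr(eta, zeta) =", round(float(np.corrcoef(eta, zeta)[0, 1]), 4))
```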
CommonCrawl
October 2013, 6(5): 1259-1275. doi: 10.3934/dcdss.2013.6.1259
$H^{\infty}$-calculus for a system of Laplace operators with mixed order boundary conditions
Matthias Geissert, Horst Heck and Christof Trunk
TU Darmstadt, FB Mathematik, Schlossgartenstr 7, D-64289 Darmstadt, Germany
Received January 2012 Revised February 2012 Published March 2013
In this paper we prove that the $L^p$ realisation of a system of Laplace operators subjected to mixed first and zero order boundary conditions admits a bounded $H^{\infty}$-calculus. Furthermore, we apply this result to the Magnetohydrodynamic equation with perfectly conducting wall condition.
Keywords: $H^\infty$-calculus, Hodge boundary condition, Laplace operator, MHD system.
Mathematics Subject Classification: Primary: 35K51; Secondary: 76W0.
Citation: Matthias Geissert, Horst Heck, Christof Trunk. $H^{\infty}$-calculus for a system of Laplace operators with mixed order boundary conditions. Discrete & Continuous Dynamical Systems - S, 2013, 6 (5) : 1259-1275. doi: 10.3934/dcdss.2013.6.1259
CommonCrawl
IZA Journal of Labor & Development Court-ship, kinship and business: a study on the interaction between the formal and the informal institutions and its effect on entrepreneurship Tanika Chakraborty1, Anirban Mukherjee2 & Sarani Saha1 IZA Journal of Labor & Development volume 4, Article number: 7 (2015) Cite this article In this paper we theoretically and empirically examine how the interaction between the formal court system and the informal loan network affects a household's decision to start a business. We find that when the formal court system is weak, expansion of informal credit network leads to the proliferation of business. However, with a sufficiently strong court system, expansion of the credit network has a negative effect on business prospects. This result is explained by the contradictions between formal laws and norms used by informal networks. JEL codes K12; L26; O17 Effective contract enforcement is the key to the process of economic development. A contract can be enforced by the formal legal court or by informal community courts, for instance panchayats in South Asia. In less developed societies both types of institutions co-exist, often coming in each others way. These conflicts are well documented in the context of the marriage market and common property management (Chowdhry 2004; Keremane et al. 2006; Madsen 1991; Nagraj 2010; Yadav 2009). However, till date there has not been any study that analyzes and estimates the effect of such interactions on economic decision making. In this paper we look at the effect of the interaction between the informal loan network and formal court system on the decision to run a business using both analytical and quantitative methods. We find that the informal network helps in business proliferation when the formal court system is weak. However, when the formal court system improves sufficiently, business might fall in areas with strong informal networks. Our paper is related to a vast body of literature that studies the effect of institutions on economic development. There is a consensus among economists that better institutions encourage capital accumulation and subsequent growth (Acemoglu et al. 2001, 2002; Rajan and Zingales 1998). However, the existing empirical literature on the effectiveness of formal institutions mostly look at the institutions of property rights which prevent the elites from expropriating. One exception is Acemoglu and Johnson (2003), who distinguish between the effects of property rights institutions and contracting institutions on growth. Using a cross country data set, they find that while good property rights institutions have a positive effect on growth, the effect of contracting institutions is not robust. This result is counter intuitive, and one possible reason could be that their data, which only measures the quality of formal contracting institutions, fails to account for the role of informal network based institutions, ubiquitous in many developing countries. Evidence shows that in the absence of effective formal courts of law, business often thrives under the informal institutions (Biggs and Shah 2006; McMillan and Woodruff 1999). In a related paper, Harriss-White (2010) finds that in absence of effective formal institutions, Indian SMEs are largely regulated by what she calls "social regulation." This is nothing but informal institutions working through community networks and reputation mechanisms. 
The caste system prevailing in India can also be seen as a grand framework of contract enforcement using the reputation mechanism (Freitas 2006). The key to the success of such reputation based mechanisms is information about one's past action (therefore reputation) flowing in the community network (Ghosh and Ray 1996; Rosenthal and Landau 1979; Kandori 1992). Many credit institutions in less developed countries, such as ROSCA in East Asia (Besley et al. 1993) and Grameen Bank in Bangladesh (Ghatak 1991), crucially depend on such information flow within communities. The use of community level information for enforcing contracts was also ubiquitous in medieval Europe (Greif et al. 1994; Greif 1993; Slivinski and Sussman 2009). Besides the general literature on institutions and its impact on economic growth, this paper is also related to the role of networks in credit provisioning. Network membership, which is often characterized by caste or ethnicity, may work both in positive and negative ways. A number of studies found in the African context that community membership can increase the probability of getting a loan if one's own community controls the supply of loans (Biggs and Srivastava 2002; Fafchamps 2000,2003, Fisman 2003; Gajigo and Foltz 2010). On the other hand, it may decrease the probability of getting loans if the credit granting authority has any negative bias towards the credit applicants ethnicity. This result has been confirmed by various studies in the context of the United States (Blanchflower et al. 2003; Fairlie and Robb 2007). However, most of the literature on institutions look at formal and informal institutions as separate phases of development – the informal system getting replaced by the formal ones in due course of development (La Porta and Shleifer 2014). At best, some authors have adopted a dual sector approach – making formal and informal two parallel sets of rules without interfering with one another (Straub 2005). But in reality, formal and informal institutions interact and mutually constitute each other. Evidence suggests that social capital affects formal economic behavior like financial decisions. Luigi et al. (2004) find that in Italy people are more likely to use formal checks, invest less in cash and more in stocks, have higher access to institutional credit and make less use of informal credit in areas of high social capital. The effect of social capital is stronger in areas with weaker legal enforcements. In a similar line of research, Karlan (2005) uses an experimental approach to find the effect of social capital on financial decisions. The interaction between formal institutions and informal norms also plays a role in the management of common property resources. For example, Sandner (2003) looked at the interaction between formal institutions and norms of the Kuna community in Central America for preservation of marine resources. He shows that erosion of norms and insufficient development of formal institutions can lead to over exploitation of marine resources. The interaction between formal and informal institutions is particularly important in less developed countries. In these countries de facto practices are quite different from de jure rules – and these differences are often shaped by the interaction between formal and informal institutions. The only theoretical exposition of such interactions that we have come across is Dixit (2004) where he argues that the development of the formal may have a detrimental effect on the informal mechanisms. 
The informal system relies heavily on the reputation mechanism, where someone with a reputation of cheating does not get a job within his/her community. People using formal contracts however do not care about reputation – punishment under formal contracting is direct and enforced by a third party (fine, imprisonment). Hence one can always cheat someone using the informal contract and then find their next employment with another employer using a formal contract. On the empirical front there has been much less research on this issue. One of the few papers related to relevance of institutions in affecting business decisions is Chemin (2012). He finds that reforms in the civil court procedure leads to lower breaches of contract, higher access to capital and building of new capacity in India. However, what Chemin finds is an average effect of more efficient courts. His research does not answer whether the effect is different for areas with different initial conditions in terms of informal institutions (such as the presence of caste panchayat). We claim that the effect of better legislation on business decisions will critically depend on these initial conditions. Another closely related paper is Klapper et al. (2006). Their study, based on 34 Eastern and Western European countries, find that higher requirements to comply with formal bureaucratic regulation prevents new businesses from entering the industry by increasing entry cost. The main contention of our paper is that formal and informal institutions might come in the way of each other, producing undesired results in places where traditional, community based dispute resolving systems are widespread. We define a business in terms of a contract where a contractor agrees to supply certain inputs to an entrepreneur. The quality of the input cannot be verified beforehand or by any third party, creating a possible moral hazard problem. The only way to punish a cheater contractor is to fire him and deny him any future employment opportunity. Hence, we have a structure similar to Shapiro and Stiglitz (1984) and Greif (1993), where the only way to prevent cheating is to pay the cheater contractor an honesty-inducing price for his input so that he finds that cheating pays off is less than the honesty pay-off. The entrepreneurs can come from a traditional producer community or someone coming from outside the community. The latter group can only enter the market if the formal contracting institutions are of sufficiently good quality. For entrepreneurs belonging to the traditional community, the community norm requires them to boycott a contractor who cheated any community member. This makes the cost of cheating someone very high for the contractor, depressing the honesty inducing price of his input supplies. We argue that in the presence of an effective formal system, the capacity to punish declines in the informal system. This is precisely because a strong formal system allows entrepreneurs from outside the community to enter the market who do not abide by the community norm of not hiring a past cheater. This makes it easy for a cheater contractor (who cheated a community member in the past) to find employment with an entrepreneur who does not belong to the community. This increases the honesty inducing price for the input, pushing the entrepreneurs with small capital stock out of the market. Our theoretical model suggests that in areas with strong networks, an honesty inducing input price is low, accommodating small entrepreneurs in the system. 
But as formal systems improve (and consequently the input price rises), these areas are the worst hit since, facing the rising input prices, the entrepreneurs are forced to quit the market. We use the India Human Development Survey (IHDS) 2004-2005 to test our theoretical predictions. In accordance with the theoretical predictions, our empirical evidence suggests that business is affected by the interplay of formal institutions and informal norms. Specifically, when formal institutions are strong enough, we find that the probability of doing business is lower in the presence of a large informal network. Given the cross-sectional nature of the data, we should be careful in interpreting our results as causal. However, this inference does not suggest the preservation of the informal institutions by limiting the power of the formal courts. It rather emphasizes the possibility of jeopardizing the expansion of business by imposing a rapid expansion of the formal contracting system. There is no point in denying that improvement in formal contracting enhances efficiency and social mobility by allowing contractors without any family/community connection to enter the market. However, the preexistence of a strong informal institutional framework, captured by large community networks, makes the rapid institutional switch socially costly as it may exclude people from participating in the market. Most importantly, the exclusion comes from the high cost of accessing the formal institutions. Given the efficiency property of formal institutions, the most logical implication of our research is to reduce the cost of formal contracting. The rest of the paper is organized as follows. Section 2 presents the analytical model, Section 3 outlines the empirical framework, Section 4 summarizes the data used to test the implications of our model, Section 5 reports the empirical findings and, finally, Section 6 concludes.
Agents: contractors and entrepreneurs
There is a pool of potential entrepreneurs who produce a good G. For producing the good G, they need an input X, which is supplied by a set of contractors who come from a traditional X-producing community C. The entrepreneurs, however, may come from both the traditional community (C) and the outside community (NC). The production of the input requires high skill, but only a fraction of the community C has the skill – we call them High type contractors. The rest of the contractors, who we call Low type, do not have the necessary skill to supply the input. Therefore, if a Low type is chosen, the entrepreneur makes zero profit. Whether a contractor is High type is common knowledge within the C community but not outside. So when a contractor from the C community asks for work, a typical NC entrepreneur cannot tell whether the contractor has the appropriate skill. This, however, is common knowledge for a C entrepreneur. For an NC entrepreneur, the first problem is that of adverse selection – to be able to distinguish between the High and the Low type. However, there is a second level problem as well – the problem of moral hazard. Even after a High type contractor is selected, he may supply bad quality input, as it saves effort for the contractor. Note that using a bad quality input for producing G is better than hiring a Low type contractor. A Low type contractor is a fraud who does not have the skill to produce the input even of bad quality.
Hence, from an entrepreneur's perspective, a High type supplying bad quality input yields a better outcome than hiring a Low type contractor who cannot supply any input. We write the condition as follows: $$ 0<\kappa < \pi^{B} < \pi^{G}, $$ (1) where $\kappa$ is the reservation income of the entrepreneur and $\pi^{j}$ is his income from hiring the High type contractor when the contractor supplies an input of quality $j$ ($j=B,G$). If the entrepreneur hires the Low type contractor, he gets 0 profit, which is less than his reservation pay-off. Hence, the entrepreneur choosing a contractor faces two types of problems: the first one is a typical problem of adverse selection, and the second one is of moral hazard.
Before proceeding further, let us discuss why both the moral hazard and adverse selection problems are necessary for the formulation of our model. The moral hazard and the adverse selection problems bring out the role that two types of institutions – formal and informal – play. Let us first take the case of the moral hazard problem where a contractor can shirk. The only punishment for cheating is not hiring a cheater again. For entrepreneurs belonging to the entrepreneurial caste, if one entrepreneur is cheated, all entrepreneurs belonging to the caste boycott the cheater contractor. This makes the punishment cost of cheating a caste entrepreneur higher than that of cheating a non-caste member. The informal institutions in this case take the form of information flow within the community. This information advantage allows the caste entrepreneurs to impose the punishment on a cheating contractor. This is not possible in the case of non-caste entrepreneurs due to the lack of credible information. Instead, the non-caste members who cannot go to informal institutions solve the moral hazard problem simply by paying a higher honesty-inducing wage than their caste-member counterparts. Note that this analysis does not require modeling of formal institutions.
The adverse selection problem, on the other hand, arises in the model because, ex-ante, it is not possible to distinguish between the Low and the High type. Here comes the role of formal institutions. The formal institutions of contract enforcement are essentially third party enforcement (court, police etc.). They solve this problem by punishing a Low type contractor who poses as a High type contractor. The higher the quality of the formal institutions, the higher the probability of catching and punishing a Low type mimicking a High type. Therefore, the quality of formal institutions enters our analysis through the channel of the adverse selection problem, making the adverse selection problem crucial to our theory. Both the moral hazard and adverse selection problems are important in our analysis because they are solved by the informal and formal institutions respectively.
The entrepreneurs can be characterized in two dimensions: community identity and endowment. An entrepreneur $i$ is endowed with business skill, or capital, $\theta_{i}$, and the endowment is distributed according to the distribution $\phi$. An entrepreneur $i$'s output $y_{i}$ is positively related to his endowment. There is another dimension of any entrepreneur – either he belongs to a traditional business community (C) or does not belong to that community (NC). However, the distributions for $\theta$ are the same for C and NC type entrepreneurs. The main difference between C and NC types is in terms of accessing informal institutions.
Only C type entrepreneurs can access the informal network for adjudicating any dispute with the contractors. However, both C and NC type entrepreneurs can access the formal court. Note that the quality of an input (good or bad) cannot be verified by the court. Hence, the court is only useful if a Low type contractor misrepresented himself as a High type and took money for supplying the input. The entrepreneur faces two levels of problems. Finding a High type is the first level of the problem. In the second level, the entrepreneur has to ensure that the High type is not behaving opportunistically – i.e., not supplying bad input after being hired. Let us now elaborate the role of institutions in solving the problems faced by the entrepreneurs. There are two types of contracting institutions available in the economy. One is formal courts, characterized by third party enforcement, and the second is informal networks, characterized by reputation-based mechanisms. In what follows, we discuss the different roles played by the formal versus informal institutions in solving the problem of asymmetric information faced by the entrepreneurs.
Adverse selection problem: the role of court
We have already mentioned that there are two types of problems that an entrepreneur faces: a Low type posing as a High type, and, after recruitment, a High type supplying bad quality input. From the entrepreneur's point of view, a Low type contractor (who can only supply zero input) is worse than hiring a High type who supplies bad quality. There are two ways of catching and punishing a Low type. The informal network of C members possesses the information regarding its members' skillfulness, i.e., everybody in the community knows which member in the community does not have the necessary training to produce X. Hence, no Low type contractor is hired by a C type entrepreneur. In other words, belonging to the community network solves the problem of adverse selection for a C entrepreneur. But NC entrepreneurs cannot access this information about the true type of the contractor ex-ante. Instead, the NC entrepreneurs can sign a formal contract with a potential contractor, and if he turns out to be the Low type, they can file a lawsuit against the Low type posing as High and get the Low type punished with probability $\sigma$, where $\sigma$ is the quality of the formal court. Hence, with a sufficiently strong formal court, Low type community members will not pose as High type members. In general, mimicking the High type is not worthwhile for the Low type if $$ \sigma (P-M) + (1-\sigma)P<0, $$ where $P$ is the price that the Low type gets by posing as the High type, and $M$ is the penalty he pays if he gets caught. The reservation pay-off of the Low type is 0. The condition tells us that there will be no Low type posing as a High type if $$ \sigma > \frac{P}{M} = \sigma^{*} $$ For $\sigma < \sigma^{*}$, the quality of formal institutions is so bad that Low types can mimic High types and get away with it. NC type entrepreneurs then find that it is not worthwhile to join the market. For low enough $\sigma$, all Low types mimic High types, and given that C type entrepreneurs already know who is a Low type, there is a very high probability that NC type entrepreneurs are matched with a Low type. This leads to our first theorem:
Theorem 2.1. For a sufficiently high quality of formal institutions ($\sigma > \sigma^{*}$), Low types do not find it worthwhile to mimic the High type, and as a result, NC entrepreneurs enter the market.
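The threshold condition can be checked with a few lines of arithmetic. The sketch below is purely illustrative (the price and penalty values are made up, not taken from the paper): it evaluates the expected mimicry pay-off $P - \sigma M$ and shows it turning negative once $\sigma$ exceeds $\sigma^{*} = P/M$.

```python
# Illustrative check of the mimicry condition (made-up values, not from the paper).
def mimic_payoff(sigma, P, M):
    # Expected payoff of a Low type posing as High: caught with probability sigma, fined M.
    return sigma * (P - M) + (1 - sigma) * P   # simplifies to P - sigma * M

P, M = 10.0, 25.0
sigma_star = P / M                              # threshold court quality sigma* = P / M = 0.4
for sigma in (0.2, sigma_star, 0.6):
    print(sigma, mimic_payoff(sigma, P, M))     # positive below 0.4, zero at 0.4, negative above
```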
The moral hazard problem
Unlike adverse selection, the moral hazard problem cannot be solved by any third party, as the quality of the input is not verifiable by a third party. Only the entrepreneur can find out the quality of the input, and the punishment she can inflict is not hiring a cheater contractor for subsequent periods. The monetary value of the punishment can be measured by the wage that a contractor loses if he is fired. We follow the efficiency wage theory framework proposed by Shapiro and Stiglitz (1984) and Greif (1993) for analyzing the solution to the moral hazard problem. We start with the case where $\sigma<\sigma^{*}$, so that only C type entrepreneurs operate. C entrepreneurs solve the adverse selection problem of selecting the High type through the information network. Hence, they face the moral hazard problem only – the problem of ensuring that the High type supplies good quality input. The entrepreneur can solve the problem by offering a payment to the contractor so that the cheating pay-off is less than the honesty pay-off. This section is modeled after Greif (1993). The contractor supplies one unit of the input to the entrepreneur and gets a payment $\rho$. If he supplies bad quality input, he saves some cost $\eta$ but at the end of the period gets fired. However, there is also an exogenous probability of terminating the contract given by $q$. In that case, if the contractor is honest, he may be hired again. For characterizing the honesty-inducing equilibrium, we define the following expressions. The pay-off for an honest agent is given by $$ V_{h} = \rho + \beta(1-q)V_{h} + q{V_{h}^{u}} $$ This shows that the lifetime pay-off of an honest contractor can be divided into current and future pay-offs. In the current period an honest agent gets the factor payment $\rho$. In the next period, however, he might get fired for an exogenous reason with probability $q$ and continue to get ${V_{h}^{u}}$ – the lifetime pay-off of an honest unemployed agent. On the other hand, the agent may stay in the job with probability $1-q$ and continue to earn an honest employed agent's pay-off – $V_{h}$. The future pay-offs are discounted by the discount factor $\beta$. By cheating, an agent gets a one-time pay-off $\eta$ in the current period. However, this one-time payment comes at the cost of losing his job at the end of the current period. From the next period onwards he gets the pay-off of an unemployed cheater. An unemployed cheater can be rehired with probability $p_{c}$ in the next period and get $V_{h}$. With probability $(1-p_{c})$ a cheater is not re-hired, and he gets the reservation wage $(\overline{\omega})$. The pay-off for an unemployed cheater, ${V_{c}^{u}}$, is summarized by the following equation: $$ {V_{c}^{u}} = \beta p_{c} V_{h} + \beta\left(1-p_{c}\right)\left(\overline{\omega} + {V_{c}^{u}}\right) $$ An honest agent can also lose his job for exogenous reasons. However, he may be rehired with probability $p_{h}$ in the next period and get $V_{h}$. On the other hand, with probability $(1-p_{h})$ he may remain unemployed and get $\left(\overline{\omega} + {V_{h}^{u}}\right)$ – the reservation pay-off plus the lifetime utility of an honest unemployed agent. $$ {V_{h}^{u}} = \beta p_{h} V_{h} + \beta\left(1-p_{h}\right)\left(\overline{\omega} + {V_{h}^{u}}\right) $$ The payment to a contractor ($\rho$) that prevents him from cheating must satisfy the condition $$ V_{h} \geq \eta + {V_{c}^{u}} $$ It is easy to understand that no entrepreneur has any incentive to pay a $\rho$ more than the minimum honesty-inducing payment.
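To make the mechanics concrete, here is a minimal numerical sketch (with illustrative parameter values, not estimates from the paper) that solves the three value-function equations above for $V_h$, $V_h^u$ and $V_c^u$ at a given payment $\rho$, and then searches for the smallest $\rho$ satisfying the no-cheating condition $V_h \geq \eta + V_c^u$. Comparing $p_c = 0$ (community punishment) with $p_c > 0$ (cheaters can be rehired) illustrates the comparative static stated next: the honesty-inducing payment rises with the rehiring probability of a cheater.

```python
# Minimal numerical sketch of the honesty-inducing payment (illustrative parameters only).
import numpy as np
from scipy.optimize import brentq

beta, q, p_h, eta, wbar = 0.95, 0.10, 0.50, 1.0, 0.5   # made-up values

def values(rho, p_c):
    # Solve the linear system for [V_h, V_h^u, V_c^u] given the payment rho.
    A = np.array([
        [1 - beta * (1 - q), -q,                   0                    ],
        [-beta * p_h,        1 - beta * (1 - p_h), 0                    ],
        [-beta * p_c,        0,                    1 - beta * (1 - p_c)],
    ])
    b = np.array([rho, beta * (1 - p_h) * wbar, beta * (1 - p_c) * wbar])
    return np.linalg.solve(A, b)

def rho_star(p_c):
    def gap(rho):
        v_h, _, v_cu = values(rho, p_c)
        return v_h - v_cu - eta        # no-cheating condition binds when this is zero
    return brentq(gap, 0.0, 1000.0)

print("rho* with community punishment (p_c = 0):  ", round(rho_star(0.0), 3))
print("rho* when cheaters are rehired (p_c = 0.3):", round(rho_star(0.3), 3))
```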
A contractor's honesty-inducing payment is rising in the probability of rehiring a cheater agent. We provide the formal proof in the appendix, but the intuition of this theorem is quite straightforward. The only punishment an entrepreneur can inflict is firing the agent, which involves the monetary cost of the forgone payment. If cheater agents can easily be rehired, the cost of losing the current job is low. In that case the input price (which he misses because of getting fired) needs to be large enough to prevent him from cheating. So we have $\rho^{*}=\rho(p_{c})$, with $\rho'>0$. We assume that community members will not appoint a contractor who has cheated another community member, i.e., in the environment where all the entrepreneurs are type C, we get $p_{c}=0$. This is possible due to the flow of information within the community of a C type entrepreneur. However, an NC type entrepreneur cannot access any such information. Hence, she cannot distinguish between an agent who cheated in the past and one who did not. For her, $p_{c}=p_{h}>0$. This leads to the next corollary:
Corollary 2.3. The honesty-inducing payment for the contractors hired by NC entrepreneurs is higher than that for the ones hired by the C entrepreneurs.
The interaction effect
First we analyze how improvements in the formal court system affect the C type entrepreneurs. The improvements in the formal court system do not directly affect C type entrepreneurs; they affect the C type by facilitating the entry of the NC type. NC type entrepreneurs can only enter the market if the formal institutions are good enough to solve the adverse selection problem. Hence, a good court allows the NC entrepreneurs to enter the market. Once in the market, the NC entrepreneurs solve the moral hazard problem the same way the C entrepreneurs solve the problem, i.e., by paying the honesty-inducing price. But the entry of NC entrepreneurs will have an indirect impact on the C entrepreneurs, as the equilibrium price for the input will go up, reducing the profit margin of the existing C entrepreneurs. This determines the number of C entrepreneurs. Who are the C entrepreneurs running businesses? The entrepreneurs with endowment $\theta_{i}$ will be in business such that $$ \pi_{i}(\theta_{i})\geq\rho^{*}, $$ where $\rho^{*}$ is the equilibrium price for the input. Solving (8) for equality, we get the lowest-endowment entrepreneur that can be in the business, $\widetilde{\theta}=\theta(\rho^{*})$, where $\widetilde{\theta}$ is rising in $\rho^{*}$. From this we get our next proposition: the cut-off endowment level of the entrepreneurs is a function of the input price, and the cut-off goes up as the input price goes up. If the equilibrium input price ($\rho^{*}$) goes up, only the entrepreneurs with sufficiently high endowment can stay in the market. As the input price goes up following the entry of the NC type entrepreneurs, the cut-off endowment level goes up.
Let us now look at the volume of business following the entry of the NC entrepreneurs. Entry of the NC entrepreneurs increases the number of NC businesses, but it decreases the number of the C entrepreneurs as the cut-off endowment level gets revised upwards. Hence, theoretically, the net effect is ambiguous, making the empirical investigation important. Suppose the number of possible community entrepreneurs is $M_{C}$. In period 0 we do not have any NC entrepreneur in the market. So the number of businesses is equal to the probability that a potential C entrepreneur will start a business times $M_{C}$.
Suppose in period 0 the cut-off endowment level was $\theta_{0}$. Hence the total number of businesses is given by $$ B_{0} = M_{C} \times \left(1-\Phi(\theta_{0})\right) $$ In period 1, NC entrepreneurs enter, and as a result, the input price goes up, moving the cut-off endowment level to $\theta_{1}>\theta_{0}$ for both C and NC entrepreneurs as they both face the same input price. Hence, while new entrants (NC entrepreneurs) add to the volume of business, the quitting community entrepreneurs reduce it, making the net effect ambiguous. Assume that the pool of potential NC entrepreneurs is $M_{N}$. The volume of NC businesses is given by $$ {B^{N}_{1}} = M_{N} \times \left(1-\Phi(\theta_{1})\right) $$ (10) The number of community businesses in period 1 is given by $$ {B^{C}_{1}} = M_{C} \times \left(1-\Phi(\theta_{1})\right) $$ Hence, total business in period 1 is given by $$ B_{1} = M_{N} \times \left(1-\Phi\left(\theta_{1}\right)\right) + M_{C} \times \left(1-\Phi\left(\theta_{1}\right)\right) $$ The change in business is given by $$ B_{1} - B_{0} = M_{N} \left(1-\Phi\left(\theta_{1}\right)\right) - M_{C} \left(\Phi\left(\theta_{1}\right) - \Phi\left(\theta_{0}\right)\right) $$ From this we get $B_{1} \gtreqless B_{0}$ according to $$ \frac{M_{N}}{M_{C}} \gtreqless \frac{\Phi\left(\theta_{1}\right) - \Phi\left(\theta_{0}\right)}{1-\Phi\left(\theta_{1}\right)} $$ Theoretically we do not have any clear-cut answer as to whether the entry of NC entrepreneurs will lead to an increase or decrease in the number of businesses. This depends on the relative size of the pools of C and NC entrepreneurs and the shape of the endowment distribution. The larger the value of $M_{C}$ compared to $M_{N}$, the more likely it is that with the improvement in the formal institutions (and consequently the entry of the NC entrepreneurs), the total number of businesses will fall. This will happen when the new entry is not sufficient to make up for the exit of the community entrepreneurs. In other words, the entrants come from the upper tail of the endowment distribution, while the quitters come from the lower tail. Hence, total business will shrink if the lower tail is denser than the upper tail. This should be the case for a less developed country characterized by inequality, where the number of people belonging to the upper wealth percentiles is smaller than that in the lower percentiles.
Next we review the interaction effect between the formal and the informal institutions and its effect on the volume of business. In the previous sections we have assumed that there is one homogeneous community network where the probability of rehiring a cheater is zero. We now extend this set-up by introducing heterogeneity in terms of the community network. We assume that there are $n$ districts, and each district $j$ is characterized by network size $\nu_{j}$. We further assume that the probability of rehiring a cheater is a falling function of the network size $$ {p^{j}_{c}}= g(\nu_{j}), $$ where $g'<0$. This assumption implies that in a district characterized by a big network, a large number of people know about one's cheating history, and the cheater finds it difficult to get a job. Let us now elaborate how improvements in the formal court system affect districts with different degrees of networks differently. In other words, we examine the role of the interaction between the existing informal network mechanism and the formal institutions in determining the volume of business.
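Before turning to the district-level heterogeneity, the ambiguity in the sign of $B_{1}-B_{0}$ can be illustrated with a back-of-the-envelope calculation. The numbers and the lognormal endowment distribution below are purely illustrative assumptions (not taken from the paper); they only show that when the endowment distribution is right-skewed and the community pool is large relative to the outside pool, the exit of low-endowment C entrepreneurs can outweigh the entry of NC entrepreneurs.

```python
# Purely illustrative: B_1 - B_0 under made-up cut-offs, pool sizes, and a lognormal
# endowment distribution (an assumed shape, not taken from the paper).
from math import erf, log, sqrt

def Phi(theta, mu=0.0, sigma=1.0):
    # CDF of a lognormal(mu, sigma) endowment distribution
    return 0.5 * (1 + erf((log(theta) - mu) / (sigma * sqrt(2))))

theta0, theta1 = 0.8, 1.5          # cut-offs before / after the input price rises
M_C, M_N = 1000, 300               # pools of community / non-community entrepreneurs

B0 = M_C * (1 - Phi(theta0))
B1 = (M_C + M_N) * (1 - Phi(theta1))
print(round(B0), round(B1), round(B1 - B0))   # negative here: exits outweigh new entrants
```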
In period 0, the larger the network, the lower is the probability for a cheater contractor to be rehired, and the lower is the input price. Hence, in a district characterized by a larger network, the cut-off endowment level for the C entrepreneurs is lower than that in a district with a smaller network. This means $$ \theta_{0} = \theta(\nu), $$ where $\nu$ represents the size of the network and $\theta'(\nu)<0$. This means that the value of $\theta_{0}$ is low in high network districts. Given that $\theta_{1}$ is determined by the cut-off level of the NC entrepreneurs, which has nothing to do with the existing network size, the expression $(\Phi(\theta_{1})-\Phi(\theta_{0}))$ is rising in the network size. Since the number of quitting businesses is rising in $(\Phi(\theta_{1})-\Phi(\theta_{0}))$, we get the following theorem: if formal institutions improve sufficiently, allowing the NC entrepreneurs to set up business, the reduction in the community businesses will be higher in high network districts than in low network districts. If the negative impact of the quitting C entrepreneurs is strong enough, this will lead to a greater reduction in the total volume of business in the districts with larger networks. We next turn to the empirical section to test the implications of our model using data from India.
Empirical specification
Empirically, a way to test the theoretical predictions would be to estimate a regression of the probability of doing business on the interaction between formal and informal institutions using panel data. A panel setting would enable us to estimate the effect of introducing formal institutions in an economy with pre-existing informal institutions. However, in the absence of any longitudinal information, we only provide suggestive evidence on our theoretical predictions. Specifically, we compare districts with varying degrees of informal and formal institutions using cross-sectional data. In particular, we estimate the following specification: $$ P(SE)_{id} = \beta_{0} + \beta_{1} IN_{d} * FC_{d} + \beta_{2}IN_{d} + \beta_{3}FC_{d}+ X_{id} + \epsilon_{id}, $$ where $P(SE)_{id}$ reflects the probability with which a household $i$ in district $d$ chooses to be self-employed over being wage employed. $IN_{d}$ is a proxy for the quality of informal institutions in district $d$. $FC_{d}$ is a proxy for the quality of formal institutions in district $d$. The interaction between $IN_{d}$ and $FC_{d}$ is our main variable of interest. According to the theoretical predictions of our model, a positive $\beta_{2}$ would imply that a higher proportion of households choose to do business when the informal network is large, which thereby helps to facilitate information flow within the network. Additionally, $\beta_{1}$ captures the impact of formal institutions on the relationship between informal institutions and self-employment. Specifically, a negative $\beta_{1}$ would imply that when the quality of formal institutions increases, businesses would exit from areas with a greater prevalence of informal institutions. Similarly, $\beta_{3}$ captures the independent effect of the quality of formal institutions on the probability of self-employment. A positive $\beta_{3}$ implies that as the quality of formal institutions improves, businesses would flourish, as it enables some new entrepreneurs to enter the market. Note however that informal and formal institutions might evolve endogenously at the district level. One way to deal with this could be to use historical data to capture the introduction of the formal court system.1 However, we cannot adopt this approach due to the paucity of such data.
Instead, we try to control for a range of household and district level controls captured in X id . Specifically, we include religion, caste, education, amount of loan taken, any caste-group membership at the household level and availability of formal loans at the district level. We use data from the India Human Development Survey (IHDS) for this study. The IHDS is a nationally representative survey of 41,554 households interviewed in 2004 and 2005 (Desai et al. 2009). Surveyed households are distributed across 382 of India's 602 districts. Our study covers households which are either self-employed, wage employed or unemployed. This leads to a sample size of 34,521 households across 373 districts in our study. For our dependent variable we use the information on employment status of different members of a household to create a household level variable of self-employment status. We define a household to be self employed if at least one member in the household owns a business in the non-agricultural sector. A household is defined to be wage-employed if at least one member is wage-employed, and no one is self-employed. A household is defined to be unemployed if no one in the household is employed.2 Our main variables of interest are informal and formal institutional quality. In equation 17, we proxy informal institutions, I N d , by the fraction of households in a district d that takes loans from informal sources, viz., friends, relatives and community credit groups.3 In general, an informal loan network not only captures the extent of loans available in a district, but it also reflects the close association between members of the network. A larger size of the informal network facilitates flow of information within the network and helps in enforcing the reputation mechanism. The quality of formal institutions is captured by the perceived quality of formal courts of law. We measure F C d as the fraction of households in a district d which perceives the judiciary to be strong. 4 Specifically, the survey asks households to rank different institutions on a scale of one to three, where three signifies the least confidence in a particular institution, and one signifies the highest confidence. We consider the perceived court quality to be strong when a household's ranking of court efficiency is one. Table 1 provides the summary of our estimation sample. About 23% of our full sample is self employed. However, when we disaggregate by sector, we find a much higher prevalence of self employment in the urban sector – about 29% of the sample is self-employed in urban as opposed to 19% in the rural sector. This has implications for the importance of the relationship between self-employment and institutional quality, which we revisit in Section 5.1. When we look at the prevalence of informal networks, we find that about 12% of the full sample has borrowed from informal sources. Moreover, the extent of informality is not very different between urban and rural sectors. With respect to the quality of formal courts, 53% of our full sample perceive the court to be efficient. Once again, the difference in perception is small between urban and rural sectors. The availability of formal loans is higher in rural areas, possibly reflecting the higher prevalence of government rural banks providing agricultural loans. However, as expected, the average size of loans is much higher in urban regions. Table 1 additionally reports the means for the other control variables that we use in our empirical specification. 
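Before turning to the results, the following minimal sketch (not the authors' code; the file name and every variable name are hypothetical placeholders) shows how a specification of this kind could be estimated as a linear probability model with district-clustered standard errors, and how the implied threshold quality of formal courts follows from the estimated coefficients.

```python
# Minimal sketch of the linear probability specification with an informal-network /
# formal-court interaction and district-clustered standard errors.
# The CSV file and all variable names below are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("ihds_households.csv")   # one row per household

model = smf.ols(
    "self_employed ~ informal_share * court_strong_share"
    " + formal_loan_share + max_loan + high_caste + hindu + literate + caste_assoc",
    data=df,
)
res = model.fit(cov_type="cluster", cov_kwds={"groups": df["district"]})
print(res.summary())

# Court quality above which a larger informal network predicts less self-employment:
b = res.params
print("threshold =", -b["informal_share"] / b["informal_share:court_strong_share"])
```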
Table 1 Summary
Table 2 reports the results from a linear estimation of equation 1. The outcome variable reflects the probability of a household being self-employed compared to being wage employed. Column 1 includes a measure of informality at the district level ($II_{d}$), an indicator for strong institutions ($SI_{d}$), and an interaction between the two. Since our variables of interest vary only at the district level, we report clustered standard errors at the district level in all specifications.
Table 2 Probability of self employment vs wage employment
The results in column 1 indicate a nonlinear relationship between the degree of informality and self-employment. The coefficient on informal networks by itself suggests that a greater extent of informality in a district predicts a higher probability of self-employment. However, this relationship depends on the strength of formal institutions. Specifically, the negative coefficient on the interaction term implies a nonlinear relationship. This can be seen from the following equation. $$ \frac{\partial Pr(SE)}{\partial II_{d}} = -\beta_{1} SI_{d} + \beta_{2} $$ Our result implies $\frac{\partial Pr(SE)}{\partial II_{d}}\gtreqless 0$ according to $SI_{d} \lesseqgtr \overline{SI_{d}}=\frac{\beta_{2}}{\beta_{1}}$. This means that greater informal networks positively affect the probability of starting a business as long as the quality of the formal court is below a certain threshold. However, as the formal institutions become sufficiently strong, a higher number of businesses would quit in areas with larger informal networks. More specifically, the coefficients can be interpreted in the following way. In column 1, the estimates imply that the threshold level of formal institutions is given by 0.54 ($\beta_{2}/\beta_{1}$). Hence, districts where more than 54% of the households perceive the judiciary to be strong are above the threshold level of formal institutions. Now consider two districts within this group – one with a low prevalence of informal networks ($D_{LI}$) and another with a high prevalence of informal networks ($D_{HI}$). Then our coefficients imply that the probability of self-employment is lower in $D_{HI}$ compared to $D_{LI}$.5 Specifically, if 60% of households perceive the court to be efficient, then a one unit difference in the extent of informal networks between $D_{HI}$ and $D_{LI}$ leads to a 10 percentage point lower probability of self-employment in $D_{HI}$ compared to $D_{LI}$ districts. Conversely, now consider the situation where the level of formal institutions is less than the threshold level of 0.54. Consider the same two districts, $D_{HI}$ and $D_{LI}$. Here our coefficients imply that the probability of self-employment is higher in $D_{HI}$ compared to $D_{LI}$. Specifically, if only 40% of the households in a district perceive the court to be strong, a one unit higher level of informal networks leads to a 10 percentage point higher likelihood of doing business. Next we add a number of control variables to the above basic specification. It is possible that districts with greater availability of informal loans also have a higher availability of any loan, so that 'IN' simply captures the extent of the total loan availability in the district instead of the extent of informality. Hence, we control for the availability of formal loans in the district in column 2. The coefficients indicate similar effects of informality. Column 3 includes a proxy for the creditworthiness of a household indicated by the maximum amount of loan taken.
Results remain unchanged here as well. The coefficient on the control variable suggests, unsurprisingly, that if a household has taken a larger amount of loan, it is more likely to start a business. Column 4 controls for other household level characteristics like religion, caste and an indicator for whether a household has any literate member. The results are in the same spirit as before. The coefficients on High Caste and Education are as expected. More educated households and traditionally higher castes are more likely to be self-employed. Hindu households, on the contrary, are less likely to be self-employed compared to households from other religious backgrounds. Finally, in column 5 we also control for household participation in a caste network, since that might increase both a household's probability of getting a loan and of starting a business. Results remain the same as before. Moreover, as expected, household participation in a caste network increases the probability of self-employment. To further verify that our results are not sensitive to varied specifications, we carry out the following robustness check. In Table 3 we include unemployment in the reference category. We now compare the probability of self-employment with wage employment and unemployment. The results remain unaffected by the inclusion of unemployment in the reference category. The coefficients on the interaction term and informal institutions show similar nonlinear effects of informal networks on business decisions, depending on the extent of formal institutions.
Table 3 Probability of self employment vs wage employment & unemployment
Heterogeneity analysis
In Table 1 we observe that the incidence of self-employment is much higher in urban areas compared to rural areas. Hence, in Table 4, we estimate the relationship separately for rural and urban regions.6 The results discussed in Table 2 are primarily driven by the urban region. Informality by itself or its interaction with formal institutions does not play a significant role in predicting self-employment for the rural region. It is possible that the link between informality and self-employment is more relevant for the urban region, which is characterized by a higher prevalence of self-employment.
Table 4 Probability of self employment vs wage employment: Rural & Urban
In general, it should be easier to start a new business in the presence of a large informal network because it facilitates the flow of information and provides easy access to loans. However, according to our theoretical predictions, the presence of a large informal network would also lead to a greater exodus of businesses when the quality of formal institutions crosses a threshold. To re-examine these possibilities we conduct a heterogeneity analysis by estimating our model separately by the extent of association of households with various caste organizations. In districts where a large fraction of households participate in caste networks, there would be a greater flow of information within the network. Consequently, it would be easier to enforce contracts using the informal system of the reputation mechanism and facilitate business opportunities. We define a district to have a strong caste network if a larger share of households in the district participate in caste organizations.7 Table 5 reports the results from this analysis. A one percentage point increase in the size of informal networks leads to a 0.52 percentage point higher probability of doing business in districts with large caste networks.
In comparison, a one percentage point increase in the size of informal networks leads to a 0.32 percentage point higher probability of doing business in districts with small caste networks. Additionally, the coefficient on the interaction term shows that the fall in business opportunities is higher in districts with large caste networks when the quality of the formal court is above a threshold.
Table 5 Heterogeneity analysis: network size
Finally, we also investigate how a greater influx of NC entrepreneurs affects our baseline relationship. Specifically, we look at the extent of migration into and out of a district, as it determines the composition of the pool of entrepreneurs within a district. A higher number of out-of-community entrepreneurs is likely to be present in high-migration compared to low-migration districts.8 Analogously, a higher number of low-endowment community entrepreneurs will operate in low-migration districts. In this situation, when the formal institutions become sufficiently strong, there would be a greater exit of low-endowment entrepreneurs from the low-migration districts. In other words, since high-migration districts would have a larger number of high-endowment out-of-community entrepreneurs to begin with, the interaction between the informal network and formal courts will have a much weaker effect than in the low-migration districts, which are populated by low-endowment entrepreneurs. Therefore, we conduct a heterogeneity analysis separately for districts with high and low migration in the urban region, as presented in Table 6. We define a district to have high migration if the fraction of migrants in that district is greater than the median. In accordance with our theoretical predictions, the informal network and its interaction with the formal court matter for self-employment in districts with low migration, possibly due to a greater prevalence of community entrepreneurs. We do not find any significant relationship for the districts with high migration.
Table 6 Heterogeneity analysis: migration
The relation between the informal and formal institutions of contract enforcement is usually seen as substitutional – the former being replaced by the latter in the course of economic development. The experiences of developing countries, however, show that these two types of institutions co-exist. Understanding the nature of their interaction therefore becomes crucial for designing optimal institutions. In this paper we model the interaction between these two types of institutions and its effect on the prospect of running a business. We test the implications of the model using household level data for India. The informal institutions of contract enforcement, which work on a reputation-based mechanism, are critical for the operations of micro-entrepreneurs who cannot access costly formal institutions for enforcing contracts. Unlike the formal institutions, which depend on the legal system for enforcing contracts, the informal institutions punish a cheater by denying him any future employment. However, such mechanisms are limited to certain communities where members abide by the community norm of not employing one who has cheated someone from that community. The system gets weaker if entrepreneurs start violating this norm. The rise of formal institutions allows non-community members, who do not follow such norms, to enter the market. This, in our analysis, significantly weakens the effectiveness of the informal institutions.
Our theoretical result shows that as long as the quality of formal institutions is below a threshold level, strong informal networks help in business proliferation. However, when the formal institutions get sufficiently strong, they come in the way of the informal ones (following the mechanism we detailed above) and increase the cost of running businesses. This affects the poorer entrepreneurs more adversely than their better-off counterparts, because it is the small capitalists who find running a business using the formal mechanism not profitable enough. We test our theoretical predictions using IHDS data and find support for our theoretical results. We plan to extend our analysis in the future by constructing a panel using administrative data on court quality. Additionally, future waves of IHDS data would allow us to observe the evolution of informal institutions over time. This will help us to provide more convincing evidence. The main result of our paper apparently warns against the possible backlash of strengthening the formal institutions in a less developed country that is characterized by strong informal institutions. Our position, however, does not endorse the maintenance of informal mechanisms. Instead, we emphasize that informal institutions, even though inefficient, are crucial for the micro-entrepreneurs to run their businesses. The implications of our paper are twofold: first, strong formal and strong informal institutions together create a negative impact on the probability of doing business; more importantly, such negative impacts typically force the capital-poor section of the entrepreneurs to quit the market. Hence, even if the strengthening of formal institutions may lead to efficient outcomes in the long run, it increases inequality in the short run. The main contribution of our paper is to emphasize this trade-off, which is often neglected in the institutions-entrepreneurship literature. The policy implication is to design formal institutions better so that the exclusion of micro-entrepreneurs can be prevented.
1 One such study is done by Kranton and Swamy (1999) who conduct a descriptive analysis of the effect of court systems on agricultural credit markets using historical data from British India.
2 We also define a separate category as agricultural household if at least one member owns agricultural land or is employed in agriculture and no one is self-employed or wage-employed. However, we did not include this reference category in our analysis because such households form a very small fraction of the total number of households.
3 All our district level estimates are computed as a fraction of the total number of households in a district.
4 Note that the formal court quality measure might suffer from measurement error problems as it is based on household perception, implying that our estimates form a lower bound. We plan to collect administrative data related to court quality to construct a more precise measure.
5 For $SI_{d}=0.6$, $P(SE)=\beta_{0}+\beta_{1} II_{d}*0.6+\beta_{2} II_{d}$.
6 These results are robust to all the specifications reported in Table 2. However, we only report the specification with the full set of controls.
7 Moreover, since the baseline relationship is driven primarily by the urban sector, we restrict this analysis to the urban sector.
8 Out-of-community entrepreneurs represent the NC entrepreneurs in our theoretical model.
Appendix
$$ V_{h}\left[1-\beta(1-q)\right] = \rho^{*} + q{V_{h}^{u}} $$ $$ {V_{h}^{u}} = \beta p_{h} V_{h} + \beta (1-p_{h})\left(\overline{w} + {V_{h}^{u}}\right) $$ $$ {V_{c}^{u}} = \beta p_{c} V_{h} + \beta \left(1-p_{c}\right)\left(\overline{w} + {V_{c}^{u}}\right) $$ Define $T=\frac{1}{1-\beta(1-q)}$. So we have $$ V_{h} = T \rho^{*} + Tq {V_{h}^{u}} $$ Substituting this in equation (8), we get $$ {V_{h}^{u}} = \beta p_{h} \left[\rho^{*}T + Tq{V_{h}^{u}}\right] + \beta\left(1-p_{h}\right){V_{h}^{u}} + \beta(1-p_{h})\overline{w} $$ $$ {V_{h}^{u}}\left[1- \beta p_{h}Tq - \beta\left(1-p_{h}\right)\right] = \beta p_{h}\rho T + \beta\left(1-p_{h}\right)\overline{w} $$ From the last equation, we get $$ {V_{h}^{u}} = \frac{T \rho \beta p_{h}}{\left[1- \beta p_{h}Tq - \beta(1-p_{h})\right]} + \frac{\beta\left(1-p_{h}\right)}{\left[1- \beta p_{h}Tq - \beta\left(1-p_{h}\right)\right]} \overline{w} $$ We can then write the previous expression as $$ {V_{h}^{u}} = \rho T_{1h} + T_{2h}\overline{w} $$ From equation (9) we get $$ {V_{c}^{u}}\left(1- \beta (1-p_{c})\right) = \beta p_{c} V_{h} + \beta\left(1-p_{c}\right)\overline{w} $$ The above expression can be written as $$ {V_{c}^{u}} = T_{1c} V_{h} + T_{2c} \overline{w}, $$ where $T_{1c} = \frac{\beta p_{c}}{1-\beta(1-p_{c})}$ and $T_{2c} = \frac{\beta(1-p_{c})}{1-\beta(1-p_{c})}$. The honesty-inducing condition tells us $$ V_{h} - {V_{c}^{u}} \geq \eta $$ Substituting from the previous expressions, we find $$ V_{h} - \left(T_{1c}V_{h} + T_{2c}\overline{w}\right)\geq \eta $$ From equations (11) and (15), we find $$ T \rho^{*} + Tq\left[\rho T_{1h} + T_{2h}\overline{w}\right] \geq \frac{\eta}{1-T_{1c}} + \frac{T_{2c}}{1-T_{1c}} \overline{w} $$ From this we get $$ \rho T\left[1+qT_{1h}\right] \geq \frac{\eta}{1-T_{1c}} + \frac{T_{2c}}{1-T_{1c}} \overline{w} - TqT_{2h}\overline{w} $$ This leads to the condition $$ \rho \geq \frac{1}{T\left[1+qT_{1h}\right]} \times \left[\frac{\eta}{1-T_{1c}} + \frac{T_{2c}}{1-T_{1c}} \overline{w} - TqT_{2h}\overline{w}\right] $$ Recall that $$ \frac{1}{1-T_{1c}} = 1+\frac{\beta}{1-\beta} p_{c} $$ $$ \frac{T_{2c}}{1-T_{1c}} = \frac{\beta{(1-p_{c})}}{{1-\beta}} $$ Hence, we find $$ \frac{\partial \rho^{*}}{\partial p_{c}} = \frac{1}{T\left[1+qT_{1h}\right]} \times \left(\eta - \overline{w}\right)\frac{\beta}{1-\beta} $$ This expression is positive as long as $\eta - \overline{w}>0$, i.e., as long as the one-time cheating payoff is more than the reservation payoff. This has to be the case because the industry payoff is more than the reservation wage, and the one-time cheating payoff is more than the industry payoff.
References
Acemoglu, D, Johnson S (2003) Unbundling Institutions. Technical report, National Bureau of Economic Research. Acemoglu, D, Johnson S, Robinson J (2001) The colonial origins of comparative development. Am Econ Rev 91(5): 1369–1401. Acemoglu, D, Johnson S, Robinson J (2002) Reversal of fortune: Geography and institutions in the making of the modern world income distribution. Q J Econ 117(4): 1231–1294. Besley, T, Coate S, Loury G (1993) The economics of rotating savings and credit associations. Am Econ Rev 83(4): 792–810. Biggs, T, Shah MK (2006) African SMEs, networks, and manufacturing performance. J Bank Finance 30(11): 3043–3066. Biggs, RMT, Srivastava P (2002) Ethnic networks and access to credit: Evidence from the manufacturing sector in Kenya. J Econ Behav Organ 49: 473–486. Blanchflower, DG, Levine PB, Zimmerman DJ (2003) Discrimination in the small-business credit market.
Rev Econ Stat 85(4): 930–943. Chowdhry, P (2004) Caste panchayats and the policing of marriage in Haryana: Enforcing kinship and territorial exogamy. Contrib Indian Sociol 38(1-2): 1–42. Chemin, M (2012) Does court speed shape economic activity? Evidence from a court reform in India. J Law Econ Organ 28(3): 460–485. Dixit, A (2004) Lawlessness and economics: Alternative modes of governance. Princeton University Press. Fafchamps, M (2000) Ethnicity and credit in African manufacturing. J Dev Econ 61(1): 205–235. Fafchamps, M (2003) Ethnicity and networks in African trade. Contrib Econ Anal Policy 2(1): 14. Fairlie, RW, Robb AM (2007) Why are black-owned businesses less successful than white-owned businesses? The role of families, inheritances, and business human capital. J Labor Econ 25(2): 289–323. Fisman, RJ (2003) Ethnic ties and the provision of credit: Relationship-level evidence from African firms. B.E. J Econ Anal Policy (Advances) 3(1): 4. Freitas, K (2006) The Indian caste system as a means of contract enforcement. Northwestern University, unpublished manuscript. Gajigo, O, Foltz JD (2010) Ethnic networks and enterprise credit: The Serahules of The Gambia. Working paper. Ghatak, M (1999) Group lending, local information and peer selection. J Dev Econ 60(1): 27–50. Ghosh, P, Ray D (1996) Cooperation in community interaction without information flows. Rev Econ Stud 63(3): 491–519. Greif, A, Milgrom P, Weingast B (1994) Coordination, commitment, and enforcement: The case of the merchant guild. J Pol Econ 102(August): 745–776. Greif, A (1993) Contract enforceability and economic institutions in early trade: The Maghribi traders' coalition. Am Econ Rev 83(3): 525–548. Harriss-White, B (2010) Globalization, the financial crisis and petty production in India's socially regulated informal economy. Glob Labour J 1(1): 152–177. Kandori, M (1992) Social norms and community enforcement. Rev Econ Stud 59(1): 63–80. Karlan, D (2005) Using experimental economics to measure social capital and predict financial decisions. Am Econ Rev 95(5): 1688–1699. Keremane, GB, McKay J, Narayanamoorthy A (2006) The decline of innovative local self-governance institutions for water management: The case of pani panchayats. Int J Rural Manag 2(1): 107–122. Klapper, L, Laeven L, Rajan R (2006) Entry regulation as a barrier to entrepreneurship. J Financ Econ 82(3): 591–629. Kranton, RE, Swamy AV (1999) The hazards of piecemeal reform: British civil courts and the credit market in colonial India. J Dev Econ 58(1): 1–24. La Porta, R, Shleifer A (2014) Informality and development. J Econ Perspect 28(3): 109–26. Guiso, L, Sapienza P, Zingales L (2004) The role of social capital in financial development. Am Econ Rev 94(3): 526–556. Madsen, ST (1991) Clan, kinship, and panchayat justice among the Jats of western Uttar Pradesh. Anthropos 86: 351–365. McMillan, J, Woodruff C (1999) Interfirm relationships and informal credit in Vietnam. Q J Econ 114(4): 1285–1320. doi:10.1162/003355399556278. Nagraj, V (2010) Local and customary forums: Adapting and innovating rules of formal law. Indian J Gender Stud 17(3): 429–450. Rajan, R, Zingales L (1998) Financial dependence and growth. Am Econ Rev 88(5): 559–586. Rosenthal, R, Landau H (1979) A game theoretic analysis of bargaining with reputation. J Math Psychol 20: 235–255. Sandner, V (2003) Myths and laws: Changing institutions of indigenous marine resource management in Central America. Springer.
Shapiro, C, Stiglitz JE (1984) Equilibrium unemployment as a worker discipline device. Am Econ Rev 74(3): 433–444. Slivinski, A, Sussman N (2009) Taxation mechanisms and growth in medieval Paris. In: European Economic History Association Conference, Geneva. Straub, S (2005) Informal sector: The credit market channel. J Dev Econ 78(2): 299–321. Yadav, B (2009) Khap panchayats: Stealing freedom? Econ Pol Wkly 44(52): 16–19.
We thank the seminar participants at Delhi School of Economics, IIT Kanpur, ISI-Calcutta, Indian School of Business, University of Hannover and conference participants at the IZA/World Bank Conference on Employment and Development 2014 and CEA 2014 for their useful comments and suggestions. We are also thankful to an anonymous referee for the valuable feedback. We gratefully acknowledge the funding received from ICSSR and IDRC for this project.
Responsible editor: David Lam
Indian Institute of Technology, Kanpur, India: Tanika Chakraborty & Sarani Saha. University of Calcutta, Calcutta, India: Anirban Mukherjee. Correspondence to Tanika Chakraborty.
The IZA Journal of Labor & Development is committed to the IZA Guiding Principles of Research Integrity. The authors declare that they have observed these principles.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0), which permits use, duplication, adaptation, distribution, and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Chakraborty, T., Mukherjee, A. & Saha, S. Court-ship, kinship and business: a study on the interaction between the formal and the informal institutions and its effect on entrepreneurship. IZA J Labor Develop 4, 7 (2015). doi:10.1186/s40175-015-0027-5
Keywords: Informal network; Formal institution
Why does using a non-parametric test decrease power?
I am thinking about using the Mann Whitney U test over Student's classic t-test. But I was warned that I'd lose power and would require a higher sample size to compensate. I don't understand: Why does using a non-parametric test decrease power?
Tags: hypothesis-testing, nonparametric, power
Can you add a reference to this? Where did you read about this? – Greenparker Mar 27 '16 at 19:29
graphpad.com/guides/prism/6/statistics/… – user46925 Mar 27 '16 at 19:31
see stats.stackexchange.com/questions/163915/… – user83346 Mar 28 '16 at 9:30
For a more quantitative approach, you can consider the asymptotic relative efficiencies. For instance, Why is the asymptotic relative efficiency of the Wilcoxon test 3/π compared to Student's t-test for normally distributed data? ... But note the ARE will often favour the non-parametric test instead if the data are not drawn from a normal distribution. – Silverfish Mar 28 '16 at 10:50
The reason that parametric tests are sometimes more powerful than randomisation and tests based on ranks is that the parametric tests make use of some extra information about the data: the nature of the distribution from which the data are assumed to have come. However, their power advantage is not invariant, as it is often minimal but sometimes they have less power. See pages 96 and onwards of David Colquhoun's old but still golden textbook Lectures on Biostatistics. It is available as a free pdf here: http://www.dcscience.net/Lectures_on_biostatistics-ocr4.pdf
Non-parametric tests are usually almost as powerful as parametric tests in the circumstances where the parametric tests are appropriate. However, in circumstances where the parametric test may not be appropriate because its assumptions are too badly violated, the non-parametric test may be more powerful. – Michael Lew - reinstate Monica
+1, but I'll give parametric tests a plug: they often have much more power when dealing with small samples. For example, you couldn't possibly reject the null hypothesis at $\alpha = 0.05$ with a Wilcoxon Signed Rank Test without $n \ge 6$. You can with $n = 2$ using a t-test. And for many bio experiments, $n = 5$ is considered an expensively large sample size! – Cliff AB Mar 27 '16 at 22:31
@CliffAB Yes, true enough, but you are talking about samples of as few as 5 for a permutations test to be less powerful than a t-test. Are the properties of statistical tests with samples of just 2 useful for inference? Surely you know more from prior and external information than the data can tell you in total. I would be very concerned about inferences based on a P-value from a sample of n=2. – Michael Lew - reinstate Monica Mar 28 '16 at 20:13
I'm talking about sample sizes as small as 5 for a permutation test to be completely useless. And there are plenty of researchers out there who don't get more than 5 samples! For larger samples, I don't know exactly what the power curve would look like, but I'm guessing it takes awhile before the permutation test catches up to be nearly equivalent when the data is truly normal. Don't get me wrong, I prefer non-parametrics! But there is a very real need for parametric tests as well.
– Cliff AB Mar 28 '16 at 20:18
In regards to $n = 2$, most bio journals require $n \ge 3$, so I assume your faith is restored? I agree that I wouldn't put 100% faith in a conclusion based on 3 samples, but in some studies, that's literally all that is possible. In the (kind of) defense of the bio world, they will usually test, say, 4 different aspects that all inspect their fundamental hypothesis. Obviously we could put more faith in the results if the $n = 100$ instead, but do we make science come to a halt because we cannot afford the sample size to make asymptotic tendencies kick in? – Cliff AB Mar 28 '16 at 20:24
@Cliff Can you give some more specific examples of situations in biology where such low sample sizes are normal? I work in neuroscience, here a typical experiment will usually have $n$ on the order of magnitude of several dozens if it's rats or, say, batches of fruit flies (and hundreds if it's neurons). When you say that $n=5$ is already expensively large, this is five of what? – amoeba says Reinstate Monica Mar 29 '16 at 15:55
"I am thinking about using the Mann Whitney U test over Student's classic test."
Generally speaking there's a lot to be said for the Mann-Whitney.
"But I was warned that I'd lose power and would require a higher sample size to compensate."
It doesn't, generally. In many cases, quite the opposite. If the assumptions of the t-test hold perfectly, and the nonparametric test you use is the Mann-Whitney, then you lose a tiny amount of power$^\dagger$, because the t-test is the most powerful test at the normal under a location-shift alternative. (The t-test uses all the available information in the sample, if the assumptions hold - equal variance, normal distribution, independence, etc. But if you don't have normal distributions, it doesn't; and in many such cases the Mann-Whitney actually makes a more efficient use of the available information.) And even if you were exactly at the normal, the power loss is quite small (in large samples, it corresponds to needing 4.7% more observations to get the same power ... less than one in 20).
$^\dagger$ (There are other nonparametric tests that don't lose power against the t-test at the normal, but that doesn't mean the Mann-Whitney is a bad choice for a test of location shift, even if you're confident you have a population distribution close to the normal distribution.)
[This argument would be like arguing against buying very cheap insurance on the basis that if nobody was ever involved in an accident it would be cheaper.]
Do you know that the data were drawn from a normal distribution? Often it's possible to tell -- even without looking at the data -- that they can't be (often one can tell simply by knowing that the variable is bounded; if it can't be negative, for example, it can't actually be normal). And if the distribution that the data were drawn from is even a little heavier tailed than the normal, the t-test is likely to be less powerful, not more; and it can be much less powerful.
(In very small samples other considerations than power come into play and I sometimes argue for a parametric test then, even though they can be sensitive to assumptions.) – Glen_b -Reinstate Monica
thanks glen. but if nonparametric methods rarely lose power, why don't we always use them over parametric ones?
– user46925 Mar 28 '16 at 11:53
I don't think I said rarely; it depends on the situation even if you're comparing MW to t (for example, the MW does lose more power with lighter tailed distributions), but as you phrased it you're going even further and generalizing from a discussion about specific tests to something much more general, and there's no support for that. For example, the power loss of the sign test against the parametric equivalent under normality is not small. You may want to be more specific about the situations in which you're now proposing to always use nonparametric tests. (Isn't this a new question?) – Glen_b -Reinstate Monica Mar 28 '16 at 12:06
@zero Another reason a person may choose a parametric method is for decision rule parsimony on future data. A simple linear regression with M covariates and N training samples only requires me to store between O(M) and O(M^2) data (the coefficients, intercept, and possibly covariances for standard errors), while most non-parametric models (for example, some kind of locally weighted regression) require me to keep O(N) data (the actual training samples themselves). For cases when N is extremely large, this can be impractical even if the non-parametric method is desirable. – ely Mar 28 '16 at 12:55
@MrF The discussion here is about a nonparametric model for the distributional form, rather than a nonparametric model for the relationship with another variable or variables (like locally weighted regression). However, you could make a similar point to the one you made, at least in respect of some kinds of nonparametric procedure. – Glen_b -Reinstate Monica Mar 29 '16 at 16:02
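As a rough illustration of the power comparison discussed in the answers, here is a minimal simulation sketch (not from the thread; the sample size, shift and distributions are arbitrary choices): it estimates the rejection rate of the two-sample t-test and the Mann-Whitney U test under a location shift, once for normal data and once for a heavy-tailed (lognormal) alternative.

```python
# Minimal power simulation: t-test vs Mann-Whitney under a location-shift alternative.
# All numbers (n, shift, number of replications) are arbitrary illustration choices.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def power(draw, shift, n=30, reps=2000, alpha=0.05):
    t_rej = u_rej = 0
    for _ in range(reps):
        x, y = draw(n), draw(n) + shift
        t_rej += stats.ttest_ind(x, y).pvalue < alpha
        u_rej += stats.mannwhitneyu(x, y, alternative="two-sided").pvalue < alpha
    return t_rej / reps, u_rej / reps

print("normal data    (t, MW):", power(lambda n: rng.normal(size=n), shift=0.5))
print("lognormal data (t, MW):", power(lambda n: rng.lognormal(size=n), shift=0.5))
```

At the normal, the t-test should come out slightly ahead, in line with the roughly 4.7% extra-observations figure quoted above; under the heavy-tailed alternative the ordering typically reverses.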
asz.ink a cryptolibertarian's hideout On Catastrophic Anthropogenic Global Warming Theory TL;DR. Skepticism is a virtue. But questioning catastrophic anthropogenic global warming theory seems to be a vice. We are told to believe that the debate is settled; there is scientific consensus that humans are the driving cause of global warming. Any skepticism at this point only serves to confuse policymakers and delay urgent political action. To the contrary, I contend that there is just cause for skepticism. Declaring any domain of knowledge as settled science is a characteristic result of the most anti-scientific attitude imaginable. It is the pinnacle of arrogance to claim any knowledge with absolute certainty; an intellectual offense surpassed only by the all-too-often following denouncement of all skepsis as anti-scientific bigotry. In contrast, science is exactly about challenging — and overturning, where appropriate — false beliefs. In the process of scientific investigation, some beliefs may wind up being reinforced rather than overturned, which is why all parties should welcome skepticism with open arms. Our confidence in the truth of scientific propositions comes precisely from our ability to question them. To hold a class of statements or ideas immune to scrutiny is not just akin to religion superficially; it shares its most fundamental tenet, namely the required leap of faith. But a different standard seems to apply in the global warming discussion. The science is settled because there is 97% consensus among scientists who express opinion on the matter. A panel of experts has concluded that humans are the driving cause of climate change with a whopping 95% confidence1. The remaining skeptics serve only to delay urgently needed political action. In fact, since this delay will certainly cause catastrophe, the spreading of unerringly unreasonable doubt constitutes a prelude to crime. The skeptics who cannot muster the good graces to remain silent, should be put in jail in anticipation of their share of the responsibility. Well, I am a persistent skeptic. And at the risk of social ostracism, of being associated with the other type of denier2, even of jail time, let me try to convince you that there is something wrong with this attitude. Starting with the philosophical foundations, and continuing first with the scientific theory and then discussing ideological bias, I think there is plenty to criticize and more than just cause for skepticism. Disclaimer: while I am a scientist, I do not do climate science. My opinion is not that of an expert and if you believe I am factually incorrect somewhere or that you know an adequate response to some of the points I raise, please feel free to leave a comment or drop me an email. I do my best to weed out incorrect beliefs whenever possible. 2. Scientificity Scientificity is the property of a theory that makes it suitable for scientific investigation. Informally, scientificity requires that the world in which the theory is true is different from the world in which it is false, and that by interacting with the world we can increase our confidence in the truth (or falsity) of that theorem. Falsifiability is a necessary requirement: an unfalsifiable explanation is one that is compatible with any phenomenon. Paradoxically, a good explanation is one that is not capable of explaining anything, but rather one that rules out particular phenomena. 
The notion is linked with information theory: a scientific theory is part of a model of the universe, and by interacting with the universe we extract information from it and incorporate it into the model. An unscientific theory is one that does not contribute to the model for being devoid of information about the world. Scientificity is a binary variable, but falsifying is a process that can be easy or difficult. Among scientific theories there is therefore a spectrum of falsifiability difficulty, matching the effort required to test the theory in question. On the one end of the spectrum there are Newtonian postulates, which can almost be tested from the philosopher's armchair. On the other end there are theories for instance about the inside of a black hole. It is difficult to test these theories, and consequently it is equally difficult to become confident about their truth value. How does this relate to the climate change debate? Well for starters, consensus is immaterial. Assuming the existence of an accurate distinguisher for telling experts from non-experts, assuming the experts are doing science honestly, and assuming they are free of systemic bias, even then their consensus at best correlates positively with the truth value of the theory in question. Also, note that "climate change" is the slippiest possible term to frame the debate with. Climate is by any reasonable definition the average weather over a suitably large time period. If this time period is large enough, then the climate is fixed by definition and any change is impossible. If the time period is short enough to be meaningful, then the average changes with the window over which it is computed, and then climate change is an immediate consequence of weather change. In other words, climate change being true or false is determined by how the term is defined rather than a reflection of the world we live in. Nevertheless, the presentation of the term "climate" as though not scientifically vacuous suggests a description of weather as a random noise signal added to an underlying steady state, which is then purported to be disturbed by human activity. This steady state theory of climate may be appealing to our mammal brains but is nevertheless assumed without justification and indeed, in contradiction to the prevalent creation and destruction theories of the Earth itself. I much prefer the term "global warming" because it, at least, implies one bit of information about the world. If average temperature is observed to drop, then clearly global warming theory is false. However, this term is still troublesome because it captures wildly different scenarios in some of which the average temperature rises by a negligible amount and in others of which the temperature rise is catastrophic to all life on Earth. Phrased in terms of information theory: one bit is not enough. A meaningful debate on global warming must concern theories that explicitly state quantities. By how much will global average temperature increase one year from now? An answer to the same question except "one hundred years from now" is less useful because it is effectively unfalsifiable — especially if the implied political action must be taken before then. To be fair though, the quantity-free discussion is limited to public discourse; scientific reports, such as the IPCC assessment reports, are quite explicit in their quantities, as they should be. Nor is discussion about the human component exempt from the requirement of being quantitative rather than qualitative. 
The theory "the globe is warming and humans are causing it" implies at most two bits of information about the world and is equally untenable in a meaningful debate. A proper theory must address to which degree humans are causing global warming and even answer hypothetical questions like how much the global average temperature would have risen if humans had not discovered fossil fuels. To its credit, the IPCC assessment report does make a claim that is falsifiable in principle [3]: an estimation of the temperature increase with and without mitigating action (among other climate-related effects). However, this theory and a complete causation theory are very difficult to falsify, chiefly because of the size of the time span that the predictions cover. To test them properly, we must compare observations far enough apart in time. And unless we go through this laborious and error-prone process of testing them we should remain accordingly unconfident about their truth values.

Nevertheless, the argument for causation does not derive from counterfactual data for the whole chain but from evidence supporting the strength of each link individually. While this stepwise approach is a sound methodology, it is easy to lose track of the forest for the trees and take evidence of strength in one link as evidence of strength in the entire chain. The situation is quite the reverse: one weak link can break the chain.

3. The Argument

The basic argument in support of anthropogenic global warming goes something like this:

1. Since the industrial revolution, human activity has put tremendous amounts of $\mathrm{CO}_2$ into the atmosphere. $\mathrm{CO}_2$ levels are rising faster than they ever have before.
2. There are quantum-scale simulations, laboratory-scale experiments, as well as city-scale observations indicating that $\mathrm{CO}_2$ behaves like a greenhouse gas, i.e., that it traps heat by absorbing infrared light but not higher wavelength radiation.
3. There is pre-historical evidence that $\mathrm{CO}_2$ correlates positively with global average temperature on geological time scales.
4. More greenhouse gases in the atmosphere means more heat from the sun will be trapped, thus increasing global average temperature.
5. We are currently observing rising global average temperature. The temperature is changing faster than it ever has before.

Therefore, human activity is causing this rise in global average temperature.

Strictly speaking, the argument is not logically valid, although it does seem convincing somehow. Let us dissect it properly, starting with premise 1.

Since the industrial revolution, human activity has put tremendous amounts of $\mathit{CO}_2$ into the atmosphere. $\mathit{CO}_2$ levels are rising faster than they ever have before.

It is certainly true that human activity since the industrial revolution has put enormous quantities of carbon dioxide into the atmosphere, as the following graph shows. (source, script)

Confused? Well, the graph you might be familiar with uses the same data but rescales the vertical axis to focus on the 0-500 parts per million range. Here it is, for reference.

Both graphs describe the same thing: the increase in atmospheric $\mathrm{CO}_2$ content, which is (presumably) driven primarily by human industrial activity. One graph is useful for illustrating the relative increase of $\mathrm{CO}_2$, i.e., 140% increase since 1850.
The other graph is useful for illustrating the absolute increase — or properly speaking relative to the level of oxygen, which is after all what the carbon dioxide is displacing when it enters the atmosphere as a result of burning fossil fuels. The point of this graph trickery is that there is no objective way to decide beforehand what does and does not constitute "tremendous" amounts of added CO$_2$, as "tremendous" is a relative term and thus requires a benchmark. The only way to decide whether this 140% increase is massive or insignificant is after determining the effects caused by this increase. Even a 1000% increase can be judged to be mild if the resulting effects are barely noticeable. More generally, the presentation of the premise as saying that the $\mathrm{CO}_2$ increase is tremendous and faster than ever before is a bit of a straw man, as neither fact is necessary for the argument to work. This point addresses only the alarming quality that is absent from the minimal formulation, "Human activity has put measurable amounts of $\mathrm{CO}_2$ into the atmosphere."

There are quantum-scale simulations, laboratory-scale experiments, as well as city-scale observations indicating that $\mathit{CO}_2$ behaves like a greenhouse gas, i.e., that it traps heat by absorbing infrared light but not higher wavelength radiation.

I do not dispute the individual statements made in the premise. Nevertheless I would like to point out that the implied conclusion, that $\mathrm{CO}_2$ is a greenhouse gas on a global scale, does not follow. $\mathrm{CO}_2$ has many subtle interactions with the complex biological and geological ecosystems that it is a part of. The focus on the heat-trapping properties and effects of the gas in isolation ignores the many terms of the equation that are introduced by its interaction with its environment. For instance, according to current understanding, both the ocean and vegetation on land absorb $\mathrm{CO}_2$. How does this absorption affect trapped heat? It is easy to come up with plausible mechanisms by which the carbon dioxide based heat trap is exacerbated or ameliorated, and whose truth cannot be determined beyond a shadow of a doubt from laboratory experiments alone.

There is pre-historical evidence that $\mathit{CO}_2$ correlates positively with global average temperature on geological time scales.

Indeed there is. A clever piece of science in action — several pieces, actually — has enabled researchers to dig into polar ice and determine the temperature as well as carbon dioxide levels of the past, ranging from yesterday to 400,000 years ago. The near exact match spurred Al Gore to mock in his film about truth, "did they ever fit together?", as though the point is contentious. Here's a graph. They do fit together. (source 1, 2, script)

It has become a cliché to point out that correlation does not equal causation, but unfortunately that task remains necessary. In fact, there is a very good reason why the "$\mathrm{CO}_2$-causes-temperature to rise and fall" theory is a particularly uncompelling one in light of this data. It is because the $\mathrm{CO}_2$ graph lags behind the temperature graph by 900 years or so, as the next cross-correlation plot shows [4] (same script). That time lag is short enough to be unnoticeable on the 400,000-year scale, but long enough to seriously reconsider the theory. The position that the pre-historical temperature and $\mathrm{CO}_2$ record supports the $\mathrm{CO}_2$-causes-temperature theory is untenable.
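The author's analysis script is linked rather than reproduced; as a hedged sketch of how such a lag estimate can be obtained, the snippet below cross-correlates two proxy series after interpolating them onto a common time grid. The file names, column names and sampling step are hypothetical, and real ice-core records need more careful handling than this.

```python
# Illustrative only: estimating the lag between ice-core temperature and
# CO2 proxy series by cross-correlation. Assumes a time axis that increases
# toward the present; if the data use years-before-present, reverse it first.
import numpy as np
import pandas as pd

temp = pd.read_csv("icecore_temperature.csv")   # hypothetical columns: t_years, delta_t
co2 = pd.read_csv("icecore_co2.csv")            # hypothetical columns: t_years, co2_ppm

step = 100                                       # years per grid point
grid = np.arange(temp["t_years"].min(), temp["t_years"].max(), step)
t = np.interp(grid, temp["t_years"], temp["delta_t"])
c = np.interp(grid, co2["t_years"], co2["co2_ppm"])

# Standardize so the cross-correlation is scale-free.
t = (t - t.mean()) / t.std()
c = (c - c.mean()) / c.std()

# With this convention, a maximum at a positive lag means the CO2 series
# trails (lags) the temperature series by `lag` grid steps.
xcorr = np.correlate(c, t, mode="full")
lags = np.arange(-(len(t) - 1), len(c))
best = lags[np.argmax(xcorr)]

if best >= 0:
    print(f"CO2 lags temperature by roughly {best * step} years")
else:
    print(f"temperature lags CO2 by roughly {-best * step} years")
```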
If anything, it supports the opposite temperature-causes-$\mathrm{CO}_2$ theory. Despite the minor inconvenience of having a time mismatch, the $\mathrm{CO}_2$-causes-temperature theory survives. The adapted argument is that $\mathrm{CO}_2$ level and temperature are both causes and effects, locked in a positive feedback loop. Temperature affects $\mathrm{CO}_2$ but $\mathrm{CO}_2$ also affects temperature. The mockery5 continues on skepticalscience.com: "Chickens do not lay eggs, because they have been observed to hatch from them." A positive feedback loop is possible but it would be highly surprising. Very few natural processes actually have positive feedback loops. It is the same mechanics as that of bombs: the combustion rate increases the pressure, which then increases the combustion rate. When positive feedback loops do occur in nature, for example in population dynamics, their extent tends to be limited due to natural stabilizing mechanisms such as disease or predators. What is more likely to be the case, is that $\mathrm{CO}_2$ and temperature are in a negative feedback loop. $\mathrm{CO}_2$ causes temperature to rise (or fall), and the resulting increase (decrease) in temperature affects $\mathrm{CO}_2$ in turn, but the magnitude of these effects dies out exponentially. In this scenario, it would be true that some amount $x$ of added $\mathrm{CO}_2$ would cause another amount $y$ of temperature increase; but an additional $x$ of $\mathrm{CO}_2$ would cause strictly less than $y$ temperature increase. The two are in an equilibrium which can be moved by a perturbation in either input; but the system responds by making another shift of the equilibrium that much more difficult. In chemical systems, this mechanism is known as Le Chatelier's principle. The point of this discussion is not to debunk the $\mathrm{CO}_2$-causes-temperature theory; Le Chatelier's principle probably is at work and if so, the $\mathrm{CO}_2$-causes-temperature theory is true in a literal sense despite being dominated by the temperature-causes-$\mathrm{CO}_2$ component. The point is rather that this theory is firstly not supported by the ice core data and secondly limited as a tool for prediction — as it certainly flies in the face of catastrophic predictions. Nevertheless, the bomb analogy, though it is erroneous, is certainly a far more effective tool of communication; and presumably that is the reason why skepticalscience.com features a Hiroshima counter so prominently on its sidebar. Criticism of delivery aside, it is worth addressing a common counter-argument to the $\mathrm{CO}_2$ lag objection, namely that "about 90% of the global warming followed the $\mathrm{CO}_2$ increase." If that argument is valid, then surely so is, "100% of the $\mathrm{CO}_2$ increase followed the global warming." No counter-argument here beyond criticizing the triviality of this truism. Trapping solar heat is precisely the defining property of greenhouse gases. As raised in the counter-argument to premise 2, the question is whether $\mathrm{CO}_2$ satisfies this definition, something that cannot be inferred from small-scale experiments alone. Unless you distrust the data, the observation that we are currently observing a warming trend is quite inescapable. A simple web search churned out the following surface air, surface sea, and satellite data. (source 1, 2, 3, script) However, the second part of the premise follows neither from the first part nor from the plot and is in need of a citation. 
In fact, the plot illustrates precisely why the proposition "the rate of change is faster than ever" is very difficult to defend. For starters, the "rate of change" is not explicitly present in the data. One has to infer it by dividing the temperature difference by the time difference associated with two samples. While rate or velocity in physics is defined as the limit of this fraction for time difference going to zero, computing this ratio from the data inevitably magnifies the measurement noise. The best you can do is consider trend lines over a time interval large enough to average out the noise. Make the interval too short, and your trendline reflects only noise; but make it too large and any human effect will be averaged out with the noise. There is no objectively right way to choose interval size. This paradox suggests the philosophical point that maybe "rate of change" is not the right way to think about temperature change to begin with. The notion is not practically usable and really reflects an unjustified ontological presupposition: the "rate of temperature change" exists, has a definite value, and produces different measurement outcomes depending on this value. Nevertheless, this ontological criticism merely reflects the difficulty of defining in precise terms what the current "global temperature" is. Second, the data reaches back only so far, and is less fine-grained the further back you go. We only have satellite data from 1980 onward; weather balloons since the end of the 19th Century; thermometers since the 1720's. However, a compelling argument for the temperature change being faster than ever should present an accurate temperature graph spanning a large stretch of history. The requisite data simply does not exist. There are secondary, indirect records that do reach further than the invention of the thermometer. Clever scientists have attempted to reconstruct earlier temperatures from ice cores (i.e., isotope ratios of air bubbles trapped in ice), tree rings, lake and ocean sediments, and so on. However, these proxy sources require careful modeling and calibration of whatever natural process records the temperature or temperature change and stores this information until readout. The accuracy of this data is limited by the degree to which the models approximate these natural processes, which is by nature very difficult to falsify. Moreover, the further back in time you go, the coarser the data becomes. The question then arises, to which degree can the sharpness of the observed recent temperature rise be explained as the by-product of the resolution mismatch? Joining together recent thermometer data with biological or geological proxy data to produce a graph resembling a hockey stick, suggests that the data's accuracy is roughly uniform across time. The addition of error bars or an uncertainty range reflects the graph makers' optimism that the inaccuracy is quantifiable to begin with. Even if you accept all the premises and pretend that the conclusion follows syllogistically from them, it is worth noting how little this conclusion actually says. Human activity is contributing to a global temperature increase. It does not follow that this temperature increase is catastrophic, will raise sea levels, will kill polar bears, will cause crop failures, draughts, monsoons, or even effect the weather one iota beyond the slight temperature change. All these projected secondary effects require separate arguments that can be questioned independently of the first. 
And yet by some sort of unspoken agreement, the entire disagreement about global warming centers around the first argument. The motte-and-bailey technique of sophistry comes to mind to describe the situation: by only defending a small but more defensible premise, the alarmist can get away with claiming a far larger conclusion. The catastrophic effects is only one implied consequence of the anthropogenic global warming argument. The second implied conclusion is the mandate for political action. However, even if we accept a list of projected catastrophes as being the consequence of burning fossil fuels, then we must still weigh the benefits against the drawbacks before such a political mandate can be justifiable6. The obvious and under-emphasized benefit of fossil fuels is its enabling of an industrialization and market economy that has lifted more people from the clutches of grinding poverty than any other object or idea throughout human history. This achievement puts the discovery of fossil fuels on par with extinction level events in terms of importance on geological time scales. If global warming does not pose a threat of equal or greater magnitude, and if its solution does not present an alternative to fossil fuels that enables us to maintain our present living standards, who is to say climate neutrality is preferable to affluence? An affluent society might be better off dealing with the consequences of climate change than an impoverished one dealing with adverse weather conditions. 4. Political Bias 4.1 Libertarianism People often accuse me of mixing ideology with reason, in particular with respect to the man-made climate change debate. Being a libertarian, I favor free markets and condemn government intervention. The scientific consensus on global warming unequivocally mandates immediate political intervention, and as such conflicts with my libertarian views. Rather than re-evaluating my political ideology, I choose to ignore the scientific facts. Or so the accusers would have me believe. This accusation fails to convince because it is predicated on a misconception of libertarian legal theory. In fact, assuming the correctness of catastrophic anthropogenic climate change, there is a justification for political action7 in response to carbon emissions. And reviewing this libertarian basis presents an opportunity for providing much-needed focus for the scientific questions whose answers are supposed to justify violence. In the libertarian tradition, violence is only justifiable in response to prior aggression, on behalf of the victim and against the perpetrator of said aggression. The purpose of this violent response is not retribution but restitution: to reinstate the state of affairs prior to the aggression; to make the aftermath independent of whether the initial aggression had or had not taken place. Also, the crime and responsive violence are committed by individuals; not by communities or societies. No group of individuals has rights that the individuals that make up the group do not, and therefore class guilt and class indemnification (that does not reduce to individual guilt and indemnification) are intolerable. The victim must be able to demonstrate ownership of the object of aggression as well as deleterious effects on said object. Lastly, the chain of cause and effect from aggressor to deleterious effect on the object must be demonstrably true. 
In the context of global warming, the object of aggression might be crops, a house on the shore, or snowy ski slopes, just to name a few. The deleterious effects might be draught, a sea level rise or a hurricane, melting snow. The aggressor would be every individual who emits carbon dioxide; and they would be responsible in proportion to the quantity of their emissions. The task of the global warming scientist would be to demonstrate the chain of cause and effect from $\mathrm{CO}_2$ emissions to the aforementioned deleterious effects. Alleged perpetrator and self-declared victim would present their cases before an independent arbitrator and all parties would be allowed to question the opposition's arguments and evidence. The validity of the scientific argument would not be decided, as it is now, in the court of public opinion; but rather in the court of a professional judge, or group of judges, who stand to lose both reputation and livelihood if their ruling is not convincing and fair. Moreover, if this ruling is later overturned on the grounds of a bad court procedure or ignored evidence, this group of judges gains complicity in the perpetuation of the crime. In contrast, in today's society, the validity of the scientific argument is decided by popular vote, and altogether by individuals who do not personally suffer the consequences of an incorrect determination. I would be remiss to neglect modifying the accusation at the onset of this section in a way that does make sense. As a proper libertarian, I recognize bullshit arguments for state expansion for what they are. With this perspective in mind, what is the likelihood that I would be overzealous in my determination of logical fallacies in an argument that is easy to misconstrue as supporting state expansion? My honest answer is "quite high". However, I write this essay while consciously aware of this answer, and I invite all corrections mandated by an honest pursuit of truth. 4.2. Other Ideologies If the determination of validity of a complex scientific theory is a probabilistic process, then one would expect the proportion of believers to non-believers to reflect the strength of the argument. If the determinators and propagators of the scientific consensus reports are correct, then this consensus itself constitutes a strong argument in favor of the theory's validity. This argument fails, however, if the determination of validity is systematically biased. If it is possible for my political ideology to influence my determination of validity, then certainly people of other political ideologies are not exempt, especially when they are not consciously aware of their biases. Anti-market bias is a documented phenomenon even among people who do not openly advocate the abolition of capitalism. Individuals with this bias will be predisposed to the belief that industrial activity is harmful, and that government intervention is necessary to curb its harmful effects. It goes without saying that the opposite is true: industry is overwhelmingly positive for human well-being and in those cases where it is harmful, the adverse effects of government mitigation inevitably outweigh the benefits. Mother Nature is a human archetype present in a wide array of cultures throughout history. The pre-historic human tribes who were predisposed to the belief in the spirit of the environment no doubt had an evolutionary advantage over those who were not. 
Early humans who avoided over-hunting or over-burning because that would anger the Earth-goddess were inadvertently improving the conditions for life in their environment, thus benefiting them indirectly because they were in symbiosis with it. Modern-day environmentalists likely tap into this archetype, even when they don't explicitly worship Gaia. Nevertheless, the existence of this bias does not void all environmental concerns.

Sponsorship Bias is the tendency for science to support the agenda of whoever paid for it. It is the reason why published papers are required to disclose their funding sources, so that critics know when to receive strong claims with more or fewer grains of salt. Sponsorship bias is frequently attributed to institutes that are critical of global warming alarmism such as the Heartland Institute and Prager University, owing to their funding coming in part from oil companies. However, the same argument works in reverse as well: if being funded by oil money skews the science towards supporting oil interests, then so does being funded by government programs skew the science towards supporting government programs.

Some time before leveling the accusation of ideological investment, people ask whether I believe in global warming, usually with negated-question grammar. I find the question particularly telling because the choice of words highlights the distinction between religion and science. On the one hand, "belief" refers to an unconscious worldview or a conscious leap of faith. On the other hand, a "belief" can be a working model of the universe and source for testable hypotheses. Only the second sense is meaningful [8] in a scientific context. The catastrophic anthropogenic global warming theory is just that — a theory, looking to be falsified or borne out by the evidence. An admittedly cursory examination of this evidence raises more questions than it answers, and chiefly of an epistemological nature. Rather than welcoming the skepticism and criticism as a means with which to refine the science, the alarmists dismiss the skeptics and critics as dangerous ideologues and their arguments as religious superstition. However, it is precisely the orthodoxy, the irrational confidence of believers, the call to sacrifice, the state sponsorship, the demonization of heretics and the call to jail them, that have more in common with the ugly end of religion than with open discussion and an honest search for truth. Climate change is not a question for religion. Let's return to science.

Footnotes

1. Presumably, this means that in 95% of Earths with the same conditions as the one we inhabit, the causation theory was found to be true.
2. Boy, am I sure that choice of words was merely coincidental!
3. A point of criticism is due here: the $\mathrm{CO}_2$ concentrations with and without mitigation are far apart (421 ppm versus 936 ppm by the year 2100) and so are the projected temperature increases (0.2 C versus 4.0 C). Between the wide ranges of causes and effects, very few mechanisms are actually ruled out. This theory is unlikely to be falsified even if it is false. We should instead prefer theories that are virtually certain to be falsified if they are false. Source: https://www.ipcc.ch/pdf/assessment-report/ar5/wg1/WG1AR5_SPM_FINAL.pdf, pages 21 and 29.
4. The graph reaches its optimum at around -900 years, meaning that the optimal fit is found by shifting the $\mathrm{CO}_2$ graph by -900 years. In other words, $\mathrm{CO}_2$ lags temperature by around 900 years.
5. To be clear: I do think mockery is fair game. However, it is not a substitute for an argument.
6. Even if you assume that the state has any legitimacy to begin with.
7. Political action refers to organized violence, not necessarily in the form of a state.
8. The first sense can be meaningful in a metaphysical discussion. For now, my conscious leap of faith is in the soundness of the logical-scientific worldview — which I hope, dear reader, you share, because otherwise the entire discussion is moot.

Author: alan. Posted on March 22, 2018 (updated June 29, 2020). Categories: climate change, science, skepticism.

One thought on "On Catastrophic Anthropogenic Global Warming Theory"

Dragoș says: Could not resist tackling your first point where you mention that "Human activity has put measurable amounts of $\mathrm{CO}_2$ into the atmosphere." Indeed, it is hard to tell from just 200 years of data whether an increase of carbon is damaging to the earth. Fortunately, it is easy to tell whether this was because of humans or not… using science! If you read this ( http://www.realclimate.org/index.php/archives/2004/12/how-do-we-know-that-recent-cosub2sub-increases-are-due-to-human-activities-updated/ ) you will see that there is a clever way to measure that the CO2 increase is because of fossil fuels. The gist of it is that CO2 coming from fossil fuel has a lower radioactive level than the CO2 produced by non-humans since fossils are veeeery old. This technique is also called radiocarbon dating… I do agree with you that if 95% of people believe that global warming is happening that should not serve as a proof. But if those people are scientists and experts in that field I would be more skeptical of me if the opinion I hold differs from theirs. Furthermore, I think there was a big incentive at the time these theories were shaping to be that scientist who debunks global warming, yet now they almost all believe this is happening. Absence of evidence should not serve as evidence of absence but there is just so much evidence (IMO) that it does happen because of us, humans.
November 2018, 12(4): 761-772. doi: 10.3934/amc.2018045

Higher weights and near-MDR codes over chain rings

Zihui Liu and Dajian Liao

Department of Mathematics, Beijing Institute of Technology, Beijing Key Laboratory on MCAACI, Beijing 100081, China; College of Science, Huaihai Institute of Technology, Lianyungang 222005, China

* Corresponding author: [email protected]

Received February 2018 Revised March 2018 Published September 2018

The matrix description of a near-MDR code is given, and some judging criterions are presented for near-MDR codes. We also give the weight distribution of a near-MDR code and the applications of a near-MDR code to secret sharing schemes. Furthermore, we will introduce the chain condition for free codes over finite chain rings, and then present a formula for computing higher weights of tensor product of free codes satisfying the chain condition. We will also find a chain for any near-MDR code, and thus show that any near-MDR code satisfies the chain condition.

Keywords: Generalized Hamming weight, near-MDR code, secret sharing scheme, chain condition, tensor product.

Mathematics Subject Classification: Primary: 94B05; Secondary: 94B99.

Citation: Zihui Liu, Dajian Liao. Higher weights and near-MDR codes over chain rings. Advances in Mathematics of Communications, 2018, 12 (4) : 761-772. doi: 10.3934/amc.2018045
Clinical Phytoscience International Journal of Phytomedicine and Phytotherapy Original contribution Possible anti-diabetic potentials of Annona muricata (soursop): inhibition of α-amylase and α-glucosidase activities Kinsgley Chukwunonso Agu1, Nkeiruka Eluehike ORCID: orcid.org/0000-0001-8137-831X1, Reuben Oseikhumen Ofeimun1, Deborah Abile2, Godwin Ideho1, Marianna Olukemi Ogedengbe1, Priscilla Omozokpea Onose1 & Olusola Olalekan Elekofehinti3 Clinical Phytoscience volume 5, Article number: 21 (2019) Cite this article Annona muricata has been used in folklore in the management of diabetes. A major strategy in decreasing postprandial hyperglycemia in diabetes involves the inhibition of carbohydrate-hydrolyzing enzymes - α-amylase and α-glucosidase. Thus, this study evaluated the in vivo and in vitro inhibitory potentials of the different parts (fruit-pulp, leaf, stem-bark and root-bark) of Annona muricata. A total of 120 Wistar rats were treated with methanol extracts for 28 days after which blood and tissue samples were collected for α-amylase assay. In vitro inhibitory properties of methanol, ethyl acetate and dichloromethane extracts of the various parts of the plant on α-amylase and α-glucosidase activities were performed using standard procedures. The mode and mechanism of interactions between the enzymes and extracts (and isolated acetogenin) were determined using various kinetic interpolations and in silico experiments. The fruit-pulp and root-bark methanolic extracts better -inhibited plasma and tissue amylase in vivo. The in vitro studies revealed that the stem-bark methanolic, fruit-pulp ethyl acetate, and leaf dichloromethane extracts, better inhibited α-amylase activity compared with the standard acarbose. Also, the leaf methanol, fruit-pulp ethyl acetate, and root-bark dichloromethane extract better inhibited α-glucosidase activity. These observations were corroborated with their higher Bmax and Vmax and lower Kd values. All the extracts exhibited an "uncompetitive" type of inhibition pattern. Also, the isolated acetogenin (15-acetyl guanacone) from the fruit-pulp showed a better binding affinity compared to the standard drug, Metformin. Better natural remedy for diabetics can be obtained from Annona muricata with minimal or no adverse side effects. Diabetes mellitus is a chronic endocrine disorder of carbohydrate, fat and protein metabolism characterized by an increase in both fasting and postprandial glucose level and it has been reported to be the major cause of mortality worldwide. There is an alarming projection of 471 million people with the disease by the year 2035 [1]. Diabetes is grouped into two forms; type 1, insulin dependent diabetes mellitus and type 2, non- insulin dependent diabetes mellitus. Type 2 is the major form accounting about 90% of cases worldwide [2]. In type 2 diabetes mellitus (DM2), postprandial hyperglycemia is important in the development of the disease; Current treatment for Type 2 diabetes remains inadequate so prevention is preferable [3]. The major strategy for the management of Type 2 diabetes is to decrease postprandial hyperglycemia. The α-glucosidase inhibitors, such as acarbose, have been used in the clinic to control blood glucose increase, especially postprandial, in DM2 [4]. Herbal medicine is predominantly available for the treatment of diabetes and the main advantages of the use of herbal drugs are effectiveness, safety, and acceptability [5]. 
The mechanism of action of these medicinal plants or their products involves retarding the absorption of glucose by inhibiting the carbohydrate-hydrolyzing enzymes, such as pancreatic amylase and α-glucosidase; through the inhibition of these enzymes, medicinal plants can effectively control the postprandial rise in blood glucose. Several medicinal plants have a high potential in inhibiting α-amylase enzyme activity [6]. Annona muricata (A. muricata) is a tropical plant species belonging to the family Annonaceae and known for its many ethnomedicinal uses. All parts of Annona are used in natural medicine in the tropics. It is considered to be a good source of natural antioxidants for various diseases. Traditionally, the leaves are used for headaches, insomnia, cystitis, liver problems and diabetes, and as an antitumor and anti-inflammatory remedy. The health benefits of this plant have been attributed to its unique phytochemical composition [7,8,9,10]. There is, however, a paucity of information that demonstrates the comparative α-amylase and α-glucosidase inhibitory properties of different extracts and fractions of the various parts of the plant (fruit-pulp, leaf, stem-bark and root-bark) with relation to its anti-diabetic activity. Hence, the present study was carried out to investigate the comparative α-amylase and α-glucosidase inhibitory potentials of methanolic, ethyl acetate and dichloromethane extracts of the fruit-pulp, leaf, stem-bark and root-bark of Annona muricata and also to determine the possible modes and mechanisms of inhibition of these enzymes by these extracts. The justification for this research is the severe gastrointestinal side effects, such as abdominal pain, flatulence and diarrhea, that have been reported in patients after the use of conventional drugs [11, 12], so that the identification of natural interventions through the use of plants like Annona muricata becomes beneficial due to their minimal or absent negative side effects.

Collection of plant materials and preparation of extracts

Fresh parts of the plant, which include the fruit-pulp, leaf, stem-bark and root-bark, were collected from household gardens around the University of Benin, Edo state, Nigeria. The plant was identified by Dr. Bamidele of the Department of Plant Biology and Biotechnology, University of Benin, and authenticated by Professor Mc Idu of the same department. A voucher specimen number, UBHa 0205, was deposited at the Herbarium of the Department of Plant Biology and Biotechnology, University of Benin. The properly washed plant samples were pulverized after drying at room temperature (about 25 °C). The pulverized plant parts were extracted by macerating 500 g of each part in methanol, ethyl acetate and dichloromethane for 72 h, after which they were filtered with a muslin cloth and the filtrate was concentrated to dryness using a rotary evaporator. The concentrated extracts were stored in airtight containers (percentage yields: methanol, 46.30%; ethyl acetate, 31.83%; dichloromethane, 29.44%) and kept in the freezer at 4 °C until use.

In vivo studies using methanolic extracts of the various parts of Annona muricata

A total of 120 male albino Wistar rats weighing 190 g–220 g were bought and kept in galvanized cages in the Department of Biochemistry animal house. They were divided into six groups containing 5 rats each. They were allowed access to feed and water ad libitum on a 12 h light / 12 h dark cycle.
The animals were acclimatized for 2 weeks before the commencement of the administration of the extracts. The guidelines and principles for the handling of animals of the research ethics committee, College of Medical Sciences, University of Benin (approval CMR/REC/2014/57), were adopted and strictly adhered to. The design for the administration of the methanolic extracts of the various parts of the plant is shown below:

Dose administered (mg/kg b.w.)   Fruit    Leaf     Stem-bark   Root-bark
0 (control)                      5 rats   5 rats   5 rats      5 rats
100                              5 rats   5 rats   5 rats      5 rats
200                              5 rats   5 rats   5 rats      5 rats

Administration of extracts

The extracts were administered daily with the aid of an orogastric tube. Care was taken not to inflict injuries on the rats.

Biochemical assay

At the end of the 28-day experimental period, the animals fasted overnight and blood samples were collected into plain sample bottles and allowed to clot for 30 min, after which they were centrifuged at 3000 rpm for 15 min. The serum was collected separately and used for the serum amylase assay. Serum amylase activity was measured using the method of Wallenfels et al. [13]. Pancreatic tissues were also excised and homogenized in ice-cold normal saline (1:4 w/v), centrifuged at 1000 g for 15 min, and the supernatant was used for the tissue pancreatic amylase assay.

In vitro studies using extracts of the various parts of Annona muricata

α-amylase inhibition assay

Serial dilutions of the plant extracts between 0 and 200 μL were prepared by mixing with 500 μL of sodium phosphate buffer (0.02 mol/dm3, at pH = 6.9, with 0.006 NaCl as the stabilizer) containing pancreatic alpha-amylase (0.50 mg/mL) of porcine origin (EC 3.2.1.1). The mixtures were incubated at 37 °C for 5 min, and then 500 μL of starch solution (1 mg/100 mL in 0.02 mol/dm3 sodium phosphate buffer at pH 6.9 with 0.006 NaCl) was introduced into the reaction mixtures. The reaction mixtures were subsequently incubated at 37 °C for 5 min in a water bath. The reaction was then stopped using 1.0 mL dinitrosalicylic acid (DNSA) and further incubated in boiling water for 5 min. The blank sample had no starch solution and no enzyme in it, while the control (reference sample) had all the reagents and the enzyme except the starch solution. Acarbose served as the positive control. When the reaction mixtures had cooled, absorbance was read at 540 nm [14, 15].

$$ \text{Percentage } \alpha\text{-amylase inhibition } (\%) = \frac{A_{\mathrm{ref}} - A_{\mathrm{sample}}}{A_{\mathrm{ref}}} \times 100 $$

α-glucosidase inhibition assay

Serial dilutions of the plant extracts between 0 and 200 μL were prepared by mixing with 100 μL of sodium phosphate buffer (0.1 mol/dm3, at pH = 6.9) containing alpha-glucosidase (EC 3.2.1.2; 1.0 U/mL) and then incubating at 37 °C for 5 min. Then 0.05 mL of para-nitrophenyl-α-D-glucopyranoside solution (5.0 mmol/dm3) in sodium phosphate buffer (0.1 mol/dm3, at pH = 6.9) was added to the reaction mixture and incubated at 37 °C for 5 min. The reaction was then stopped using 1.0 mL dinitrosalicylic acid (DNSA) and further incubated in boiling water for 5 min. The reaction mixtures were allowed to cool and the absorbance was then read at 405 nm [16]. The blank sample had no substrate solution and no enzyme in it, while the control (reference sample) had all the reagents and the enzyme except the substrate solution. Acarbose served as the positive control.
$$ \text{Percentage } \alpha\text{-glucosidase inhibition } (\%) = \frac{A_{\mathrm{ref}} - A_{\mathrm{sample}}}{A_{\mathrm{ref}}} \times 100 $$

Investigation of inhibitory concentrations (IC50), modes and mechanisms of inhibition of α-amylase and α-glucosidase activity (enzyme kinetics)

The modes and mechanisms of interaction between the enzymes and the extracts (as well as the isolated compound) were studied using various kinetic plots, viz., the sigmoid (Hill slope), the hyperbola (maximum binding capacity, Bmax, and dissociation constant, Kd), and the Michaelis-Menten plot (Km and Vmax). These were used to determine the IC50 of the extracts. The Bmax and Kd demonstrated the degree of binding and the period of inhibition, which gave an idea of the level of efficacy of the extracts. The IC50 gave an idea of the level of potency of the extracts in inhibiting the enzymes. The acetogenin 15-acetyl guanacone, which had previously been isolated from the leaf ethyl acetate extract and characterized by Agu et al. [8], was subjected to molecular docking analyses to obtain its binding affinity (Ba), in an attempt to determine whether the influence of the extracts originated from this compound as the mechanistic molecule. This was compared against a standard anti-diabetic drug, metformin.

Protein preparation and generation of 3-D structure using homology modeling

The starting structure (PDB ID: 4GL7) required for docking was retrieved from the Protein Data Bank repository (http://www.rcsb.org). Prior to docking, water and ligand coordinates were deleted. α-Amylase and α-glucosidase were downloaded from www.pubmed.org and used to model the starting structure of the elucidated compound used in the current study. Homology modeling was done on the Swiss-Model server (http://swissmodel.expasy.org). This requires one sequence of a known 3D structure with significant similarity to the target sequence. The coordinate file of the template from the Protein Data Bank (PDB ID: 4GL7) was used to model the 3D structure of VEGF2.

Ligand preparation for docking

The 3D structure of the elucidated compound was built using MarvinSketch and optimized for docking studies. The optimized ligand molecule (the compound) was docked into a refined aromatase model using "Ligand-Fit" in Auto-Dock 4.2.

Molecular docking calculations

These were carried out through BSP-SLIM and Auto-dock. The modeled structures of VEGF2 and the elucidated compound were loaded on the BSP-SLIM server and Auto-dock/Vina, and all the water molecules were removed prior to the upload. BSP-SLIM is a blind docking method, which primarily uses structural template matching to identify putative ligand binding sites, followed by fine-tuning and ranking of ligand conformations in the binding sites through SLIM-based shape and chemical feature comparisons [17]. Protein snapshots were taken and analyzed using PyMOL.

The data were entered into Microsoft Excel v.13 prior to analysis. GraphPad Prism (version 6.01, 2012) was used to analyze the data and obtain the means, SEM and IC50 values, using one-way analysis of variance and the unpaired Student's t-test. The level of significance was taken as p ≤ 0.05. The sigmoid (Hill slope), hyperbola (maximum binding capacity, Bmax, and dissociation constant, Kd), and Michaelis-Menten (Km and Vmax) parameters were also determined using the GraphPad Prism software.
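The kinetic constants above were obtained with GraphPad Prism; as a rough, open-source illustration of the same idea (not the authors' workflow), the sketch below fits a four-parameter logistic (Hill) curve to a made-up concentration versus percentage-inhibition series to recover an IC50 and Hill slope.

```python
# Illustrative sketch only (not the authors' Prism workflow): fitting a
# four-parameter logistic (Hill) model to % inhibition data to estimate IC50.
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, bottom, top, ic50, slope):
    """Four-parameter logistic: response rises from `bottom` to `top`."""
    return bottom + (top - bottom) / (1.0 + (ic50 / conc) ** slope)

# Made-up dose-response data: extract concentration (mg/dL) vs % inhibition.
conc = np.array([0.2, 0.4, 0.8, 1.6, 3.2])
inhibition = np.array([8.0, 21.0, 43.0, 61.0, 78.0])

# Initial guesses: 0-100 % range, IC50 near the middle of the tested range.
p0 = [0.0, 100.0, 1.0, 1.0]
params, pcov = curve_fit(hill, conc, inhibition, p0=p0, maxfev=10_000)
bottom, top, ic50, slope = params

print(f"IC50 ≈ {ic50:.2f} mg/dL, Hill slope ≈ {slope:.2f}")
```

A one-site binding hyperbola, Y = Bmax*X/(Kd + X), can be fitted in exactly the same way to obtain Bmax and Kd.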
In vivo study (Tables 1 and 2) Table 1 Dose-response characteristics of the influence of Annona muricata methanolic extracts on plasma amylase activity Table 2 Dose-enzyme response characteristics of the influence of Annona muricata methanolic extracts on tissue amylase activity In vitro studies (Tables 3, 4, 5, 6, 7 and 8) Table 3 Dose-response characteristics of the influence of Annona muricata methanolic extracts on α-amylase activity Table 4 Dose-response characteristics of the influence of Annona muricata ethyl acetate extracts on α-amylase activity Table 5 Dose-response characteristics of the influence of Annona muricata dichloromethane extracts on α-amylase activity Table 6 Dose-response characteristics of the influence of Annona muricata methanol extracts on α-glucosidase activity Table 7 Dose-response characteristics of the influence of Annona muricata ethyl acetate extracts on α-glucosidase activity Table 8 Dose-response characteristics of the influence of Annona muricata dichloromethane extracts on α-glucosidase activity α -Amylase catalyzes the hydrolysis of α-(1, 4)-D-glycosidic linkages of starch and other glucose polymers. Inhibitors of this enzyme could be of use in the treatment or management of diabetes. In the management of diabetic patients, the inhibition of the enzymes that are involved in the breakdown of carbohydrate e.g., α-amylase and α-glucosidase, leads to inhibition of starch hydrolysis, thus, resulting in a decreased level of glucose available for assimilation into the blood (regulating postprandial glycemic level). Several In vitro studies have confirmed the inhibitory potential of medicinal plants on α-amylase and α-glucosidase activities and in some cases, the bioactive compounds, which presumably are responsible for this mechanism of action, have been identified. However, studies conducted in animal models are few [18] and even less abundant are the studies performed in human subjects. In vivo investigations In this study, methanol extracts of the different part of Annona muricata (fruit-pulp, leaf, stem-bark, and root-bark) were investigated (In vivo) for their potential inhibitory effects on plasma and pancreatic tissue amylase activities at varying doses of 0 to 800 mg/kg (Figs. 1 and 2) body weight in male albino Wistar rats. Amongst the parts of the plants tested, the fruit-pulp showed the highest inhibitory effect on plasma and tissue amylase activities. The root-bark showed the second highest inhibitory effect, this was followed by the leaf extract and then the stem-bark extract. As demonstrated in Table 1, the IC50 of the methanol extract of Annona muricata fruit-pulp on plasma amylase (40.36 mg/kg) gave a better response compared to the other part extracts. However, on the whole, the fruit-pulp and root-bark methanol extract better inhibited the plasma α-amylase as represented by their Bmax, Kd and Vmax values. Also from Table 2, the methanol extracts of the fruit-pulp and root-bark demonstrated a better Bmax compared to leaf and stem-bark extracts that demonstrated better IC50 and Kd. 
Summarily, the IC50, Kd and Vmax represented how potent the extracts were (i.e., the lower these enzyme kinetic properties, the higher the potencies and abilities to delay the speed of the reactions catalyzed by the enzymes), while the Bmax described the possible efficacies of the extracts (i.e., the higher this property, the higher the efficacies and the abilities of the moieties involved the active sites of the enzymes to bind actively, firmly and efficiently to inhibit the speed of catalysis and hydrolysis of the carbohydrate substrate). Dose-response line plot of the influence of Annona muricata methanolic extract on plasma amylase activity Dose-response line plot of the influence of Annona muricata methanolic extract on tissue amylase activity However, since these plant extracts (especially, the fruit-pulp extract) were observed to be effective in inhibiting α-amylase and α-glucosidase activities, which is one of the mechanisms involved in controlling postprandial glycemia, these extracts may be helpful in the management of obesity and diabetes. In vitro investigations One of the major strategies used in the treatment and management of diabetes which has been proven to be most effective is to decrease post-prandial hyperglycemia, at the intestinal-blood interphase, by targeting the point of carbohydrate hydrolysis and mobilization into the blood. The biochemical argument is that, if this point can be effectively checked, then the major underlying cause of diabetes ab initio (hyperglycemia) can be prevented. This can be achieved by using inhibitors such as acarbose, miglitol, and voglibose [4]. However, severe gastrointestinal side effects such as abdominal pain, flatulence, and diarrhea seen in the patients [11, 12] have been linked to the use of these drugs. These side effects can be explained due to the fermentation of undigested carbohydrates by resident bacteria in the colon which they are able to reach as a result of the complete inhibition of α-amylase [4]. In addition, it is thought that some of these drugs may increase the incidence of renal tumors, hepatic injuries, acute hepatitis and pancreatitis [19]. Therefore, there is a need to identify and explore the amylase inhibitors from natural sources having fewer side effects. Several studies have revealed that α-amylase and α-glucosidase activity have a great influence on blood glucose level and their inhibition could significantly decrease the postprandial rise in blood glucose [20]. Several inhibitors of α-amylase and α-glucosidase have been isolated from medicinal plants to serve as an alternative drug with increased potency and lesser adverse effects than existing synthetic drugs [21, 22]. α-amylase inhibitory activity has been demonstrated in a number of plant extracts including Hibiscus sabdariffa L. (Malvaceae) [23], Artocarpus heterophyllus Lam. (Moraceae) [24], Amaranthus hypochondriacus L. (Amaranthaceae) [25], Punica granatum L. (Punicaceae), Mangifera indica L. (Anacardiaceae) [6], Arecae seeds (Palmaceae) and Corni fruits (Cornaceae) [26]. The obtained results of the In vitro α-amylase inhibitory potential of methanolic, ethyl acetate and dichloromethane extracts of the fruit-pulp, leaf, stem-bark, and root-bark of A. muricata is shown in Figs. 3, 4, 5, 6, 7 and 8. 
For methanolic extract, the stem-bark (IC50 = 1.843 mg/dL) showed the highest inhibitory effect followed by leaf extract, then fruit-pulp and root-bark with IC50 of 1.846 mg/dL, 2.163 mg/dL and 2.177 mg/dL, respectively, compared to Acarbose (1.722 mg/dL). At the highest concentration of 3.20 mg/dL, the methanol extract of fruit-pulp, leaf, stem-bark and root-bark showed a 63.46% and 77.49%, 79.22% and 67.33% inhibitory effect respectively on α- amylase activity as against the 91.51% shown by Acarbose. The high total phenol content of the methanolic leaf and fruit-pulp extracts of A. muricata [7] may be responsible for this high inhibitory effect. This is consistent with earlier studies where α-amylase and α-glucosidase inhibitory effects of plant foods were attributed to their phenolic constituents [27,28,29]. The observed effects may also be due to the presence of more chemical constituents such as acetogenins, lignans (phyllanthin and hypophyllanthin), terpenes, flavonoids (quercetin, quercetin, rutin), and alkaloids in the methanolic extracts of the fruit-pulp and leaves. This observed higher inhibitory potencies of the leaf and stem-bark was corroborated by their higher Bmax and Vmax values (Table 3). Dose-response curve of α-amylase inhibition by methanolic extract of Annona muricata fruit-pulp, leaf, stem-bark and root-bark and acarbose (reference standard) Percentage inhibition of α-amylase by methanolic extract of Annona muricata fruit-pulp, leaf, stem-bark, root-bark and acarbose (reference standard) Dose-response curve of α-amylase inhibition by ethyl acetate extract of Annona muricata fruit-pulp, leaf, stem-bark, root-bark and acarbose (reference standard) Percentage inhibition of α-amylase by ethyl acetate extract of Annona muricata fruit-pulp, leaf, stem-bark, root-bark and acarbose (reference standard) Dose-response curve of α-amylase inhibition by dichloromethane extract of Annona muricata fruit-pulp, leaf, stem-bark, root-bark and acarbose (reference standard) Percentage inhibition of α-amylase by dichloromethane extract of Annona muricata fruit-pulp, leaf, stem-bark, root-bark and acarbose (reference standard) The result of the α-amylase inhibitory activities of the ethyl acetate extracts of the fruit-pulp, leaf, stem-bark and root-bark showed that the fruit-pulp and root-bark extracts gave the higher inhibitory effect with IC50 of 2.196 mg/dL and 2.997 mg/dL, respectively, compared with the standard Acarbose (IC50 = 1.722 mg/dL), and also the higher Kd value compared with the leaf and stem-bark extracts with the higher Bmax and Vmax (Table 4). For the dichloromethane extracts (Table 5), the leaf and fruit-pulp extracts had the highest inhibitory effects (IC50 of 2.127 mg/dL and 2.761 mg/dL, respectively; Kd of − 0.044 and − 0.054 mg/dL, respectively) as against that of Acarbose (IC50 of 1.722 mg/dL; Kd of − 0.079). The result of the α-glucosidase inhibitory activity is demonstrated by Figs. 9, 10, 11, 12, 13 and 14. For the methanolic extracts, the leaf extract (IC50 of 1.623 mg/dL) showed the highest α-glucosidase inhibitory effect; this was closely followed by the stem-bark, root-bark and lastly fruit-pulp (IC50 of 2.077 mg/dL, 6.483 mg/dL and 6.734 mg/dL, respectively); this observed higher potency of the methanolic leaf extract against α-glucosidase is strongly corroborated by the higher Bmax (51.49 U/L), lower Kd (− 0.030 mg/dL) and higher Vmax (60.12 U/L) compared to the other methanolic extracts and acarbose (Table 6). 
For the ethyl acetate extract, the fruit-pulp gave the highest inhibitory potency against α-glucosidase activity compared to the leaf extract, then the stem-bark and root-bark; fruit-pulp had an IC50 of 1.717 mg/dL (Kd of − 0.030 mg/dL) better than acarbose with an IC50 of 1.722 mg/dL (Kd of − 0.079 mg/dL). However, the ethyl acetate leaf extract demonstrated a better efficacy with Bmax of 51.49 U/L and Vmax of 60.12 U/L, compared to acarbose with Bmax of 51.17 U/L and Vmax of 63.07 U/L (Table 7). For the dichloromethane extract, the root-bark gave the highest α-glucosidase inhibitory effect (IC50 0f 4.254 mg/dL), closely followed by leaf extract, stem-bark and fruit-pulp with IC50 of 7.188 mg/dL, 17.91 mg/dL, and 27.86 mg/dL, respectively (Table 8). The root-bark dichloromethane extract demonstrated the highest Bmax (48.50 U/L) and Vmax (57.02 U/L), compared to the other dichloromethane extracts. These results obtained for the extracts of the different parts of Annona muricata clearly shows that leaf and fruit-pulp exhibited a better inhibitory effect than the standard drug acarbose and therefore may be used as natural sources of management of post-prandial hyperglycemia. Also observed in this study was that the extracts exhibited a better α-glucosidase inhibitory activity (potency and efficacy) compared to the α-amylase activity. Earlier reports by Stephen et al. [30] on the distribution of phenolic contents, anti-diabetic potentials, anti-hypertensive properties, and anti-oxidative effects of Soursop (Annona muricata L.) fruit parts in vitro showed that Soursop extracts significantly inhibited α-glucosidase more than α-amylase. Kwon et al. [31] had earlier suggested that natural α-glucosidase inhibitors from plants had been shown to have strong inhibitory activity against α-glucosidase and therefore can be potentially used as an effective therapy for the management of postprandial hyperglycemia with minimal side effects. Dose-response curve of α-glucosidase inhibition by methanolic extract of Annona muricata fruit-pulp, leaf, stem-bark, root-bark and acarbose (reference standard) Percentage inhibition of α-glucosidase by methanolic extract of Annona muricata fruit-pulp, leaf, stem-bark, root-bark and acarbose (reference standard) Dose-response curve of α-glucosidase inhibition by ethyl acetate extract of Annona muricata fruit-pulp, leaf, stem-bark, root-bark and acarbose (reference standard) Percentage inhibition of α-glucosidase by ethyl acetate extract of Annona muricata fruit-pulp, leaf, stem-bark, root-bark and acarbose (reference standard) Percentage inhibition of α-glucosidase by dichloromethane extracts of Annona muricata fruit-pulp, leaf, stem-bark, root-bark and acarbose (reference standard) Percentage inhibition of α-glucosidase by dichloromethane extract of Annona muricata fruit-pulp, leaf, stem-bark, root-bark and acarbose (reference standard) To suggest or predict the nature of inhibition (competitive, non-competitive, uncompetitive, or mixed exhibited by the extracts) data are often analyzed by a set of techniques that linearize inherently non-linear relationships such as the Lineweaver-Burke's plot. We tried to understand the inhibition mechanism utilized against α-amylase and α-glucosidase by extracts of different parts of the Annona muricata. When compared with the standard drug (acarbose), all the extracts showed decreases in Km and Vmax, thus, suggesting that all extracts may have exhibited an uncompetitive inhibition pattern. 
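The diagnostic invoked here, a simultaneous fall of the apparent Km and apparent Vmax relative to the uninhibited (acarbose) reference, is the textbook signature of uncompetitive inhibition: both constants are divided by the same factor (1 + [I]/Ki), so Lineweaver-Burk plots recorded at different inhibitor concentrations give parallel lines. The short sketch below reproduces this behaviour with arbitrary constants; it is illustrative only and is not fitted to the study's data.

```python
# Illustrative sketch of uncompetitive inhibition kinetics (arbitrary constants,
# not fitted to the study's data). With an uncompetitive inhibitor both the
# apparent Km and the apparent Vmax fall by the same factor alpha = 1 + [I]/Ki,
# so Lineweaver-Burk plots give parallel lines (equal slope Km/Vmax).
import numpy as np

Vmax, Km, Ki = 60.0, 2.0, 0.5       # U/L, mg/dL, mg/dL (arbitrary)
S = np.linspace(0.2, 10.0, 50)      # substrate concentration, mg/dL

def rate_uncompetitive(S, I):
    alpha = 1.0 + I / Ki            # inhibitor binds only the ES complex
    return Vmax * S / (Km + alpha * S)

for I in (0.0, 0.5, 1.0):           # inhibitor concentrations, mg/dL
    alpha = 1.0 + I / Ki
    v = rate_uncompetitive(S, I)
    # Lineweaver-Burk: 1/v = (Km/Vmax)(1/S) + alpha/Vmax -> slope independent of I
    slope, intercept = np.polyfit(1.0 / S, 1.0 / v, 1)
    print(f"[I]={I:3.1f}: Km_app={Km/alpha:4.2f}  Vmax_app={Vmax/alpha:5.1f}  "
          f"LB slope={slope:.4f}  intercept={intercept:.4f}")
```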
This means that the active compound(s) in the extracts bind only to the enzyme-substrate complex (the inhibitor binding is only assessable when the carbohydrate binds to α-amylase and α-glucosidase) and that, the inhibition cannot be reversed by increase in the substrate concentration, i.e., intestinal carbohydrate; these confer a high level of benefits for diabetics indicating that no matter the consumed concentration of carbohydrate, α-amylase, and α-glucosidase will be inhibited preventing or reducing glucose mobilization into the blood postprandial. Thus, to increase α-amylase and α-glucosidase affinity for carbohydrate, the enzyme-substrate complex must be decreased, but with the presence of the inhibitors present in Annona muricata, this becomes unachievable especially during the fed state of the diabetic; uncompetitive inhibition works best when the substrate concentration is high ("a light at the end of the tunnel for diabetics"). In silico investigations In an attempt to narrow this observed potency and efficacy of the leaf and fruit-pulp extracts in uncompetitive inhibition of α-amylase and α-glucosidase, an isolated acetogenin [8] identified as 15-acetyl guanacone was subjected to molecular docking experiments to ascertain its affinity levels of these enzymes compared to a standard, metformin (Figs. 15, 16, 17, 18 and 19). 2D structure of 15-acetyl guanacone (i), the 3D structure of α-amylase (ii), and 3D structure of α-glucosidase (iii) The Binding pose of 15-acetyl guanacone at the active site of α-amylase with a binding energy of − 6.80 kcal/mole (i), and molecular interaction of 15-acetyl guanacone with amino acid residues within the active site of α-amylase (ii) The Binding pose of metformin at the active site of α-amylase with a binding energy of − 4.90 kcal/mole (i), and molecular interaction of metformin with amino acid residues within the active site of α-amylase (ii) The Binding pose of 15-acetyl guanacone at the active site of α-glucosidase with a binding energy of − 7.00 kcal/mole (i), and molecular interaction of 15-acetyl guanacone with amino acid residues within the active site of α-glucosidase (ii) The Binding pose of metformin at the active site of α-glucosidase with a binding energy of − 5.40 kcal/mole (i), and molecular interaction of metformin with amino acid residues within the active site of α-glucosidase (ii) The strong α-amylase and α-glucosidase inhibitory potentials observed in the ethyl acetate extracts of fruit (and leaf) may be linked to the presence of 15-acetyl-guanacone (a compound isolated from the ethyl acetate fraction of fruit-pulp of Annona muricata) [8]. The molecular docking tool has been used to study the inter-relationship between a small molecule and a receptor at the atomic level, which may give the insight to characterize the behavior of small molecules in the binding site of target proteins, as well as to elucidate biochemical processes (amino acids bonded to an active site) [32]. The knowledge of the interaction between compounds and digestive enzymes may be an initial stage toward the synthesis of drug, nutraceuticals or functional foods [32]. It was observed that the binding of metformin (a standard drug) to the active site of α-amylase and α-glucosidase gave binding energies of − 4.90 kcal/mole (α-amylase) and − 5.40 kcal/mole (α-glucosidase), while the binding of 15-acetyl guanacone to the active site of α-amylase and α-glucosidase gave binding energies of − 6.80 kcal/mole and − 7.00 kcal/mole, respectively. 
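For context, a docking score of this kind can be translated into an order-of-magnitude inhibition constant through the relation ΔG = RT ln Ki, the convention used, for example, for AutoDock's "estimated inhibition constant". The sketch below applies that conversion to the binding energies quoted above, assuming T = 298.15 K; the paper itself does not report such a conversion, so the resulting Ki values are rough estimates only.

```python
# Sketch: converting docking binding energies (kcal/mol) into estimated
# inhibition constants via dG = RT*ln(Ki), as AutoDock-style scoring does.
# Assumes T = 298.15 K; results are rough estimates, not measured constants.
import math

R = 1.98719e-3          # gas constant, kcal / (mol K)
T = 298.15              # absolute temperature, K

def estimated_ki(delta_g_kcal_per_mol):
    """Estimated inhibition constant (mol/L) from a binding free energy."""
    return math.exp(delta_g_kcal_per_mol / (R * T))

# Binding energies reported above (kcal/mol)
binding_energies = {
    ("15-acetyl guanacone", "alpha-amylase"): -6.80,
    ("15-acetyl guanacone", "alpha-glucosidase"): -7.00,
    ("metformin", "alpha-amylase"): -4.90,
    ("metformin", "alpha-glucosidase"): -5.40,
}

for (ligand, enzyme), dg in binding_energies.items():
    ki_micromolar = estimated_ki(dg) * 1e6
    print(f"{ligand:20s} / {enzyme:17s}: dG = {dg:5.2f} kcal/mol "
          f"-> Ki ~ {ki_micromolar:7.1f} uM")
```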
Thus the isolated compound demonstrated a better ability to bind to the enzymes than the standard drug, metformin. These observations corroborate the observed significantly higher Bmax and Kd values observed for the fruit-pulp and leaf extracts. The fruit-pulp and leaf of Annona muricata (Soursop) demonstrated significantly high abilities to inhibit α-amylase and α-glucosidase and minimize the rate of glucose assimilation into the blood after feeding. This is evident in the obtained IC50, Bmax, Kd and Vmax values which further suggested that the mechanism of inhibition is that of uncompetitive inhibition, thus conferring an appreciable potency and efficacy for the plant compared to standard drugs. The research also suggested that these acetogenins may be responsible for these remarkable observations as was demonstrated by in silico studies using 15-acetyl guanacone. Thus, Annona muricata can be very beneficial in the treatment and management of hyperglycemia, diabetes, overweight, and obesity, etc. This suggests that better natural remedies for diabetics can be obtained from Annona muricata with minimal or no adverse side effects. All the data for the research have been provided within the research paper. Aguiree F, Brown A, Cho NH et al.,.IDF Diabetes Atlas, 2013. Mitra A. Preparation and effects of cheap salad oil in the Management of Type 2 rural Indian diabetics. J Hum Ecol. 2008;23:27–38. Mitra A. Some salient points in dietary and lifestyle survey of rural Bengal particularly tribal populace in relation to rural diabetes prevalence. Studies on Ethnomedicine. 2008;2(1):51–6. Bischoff H, Puls W, Krause HP, Schutt H, Thomas G. Pharmacological properties of the novel glucosidase inhibitors BAY m 1099 (miglitol) and BAY o 1248. Diabetes Res Clin Pract. 1985;1:53–62. Valiathan MS. Healing plants. Curr Sci. 1998;75(11):1122–7. Prashanth D, Padmaja R, Samiulla DS. Effects of certain plant extracts on α-amylase activity. Fitoterapia. 2001;72:179–81. Agu KC, Okolie NP, Eze GI, Anionye JC, Falodun A. Phytochemical analysis, toxicity profile and hemo-modulatory properties of Annona muricata (soursop). Egyptian Journal of Haematology. 2017;42:36–44. Agu KC, Okolie NP, Falodun A, Erharuyi O, Igbe I, Elekofehinti OO, Oghagbon SE. Isolation and molecular docking experiments of 15-acetylguanacone from Annona muricata Linn. J Appl Sci Environ Manag. 2017;21(2):236–43. Okolie NP, Agu KC, Eze GI. Protective effect of ethanolic leaf extract of Annona muricata Linn on some early events in Cycas-induced colorectal carcinogenesis in rats. Journal of Pharmaceutical Science and Innovation. 2013;2(4):14–21. Okwu DE, Omodamiro OD. Effects of hexane extract and phytochemical content of Xylopia aethiopica and Ocimum gratissimum on the uterus of Guinea pig. Bio-Research. 2005;3(2):40–4. Abirami G, Nagarani Siddhuraju P. In vitro antioxidant, anti-diabetic, cholinesterase and tyrosinase inhibitory potential of fresh juice from Citrus hystrix and C. maxima fruits. Food Sci Human Wellness. 2014;3(1):16–25. Huang THW, Peng G, Kota BP, et al. Anti-diabetic action of Punica granatum flower extract: activation of PPAR-γ and identification of an active component. Toxicol Appl Pharmacol. 2005;207(2):160–9. Wallenfels K, Foldi P, Niermann H, Bender H, Linder D. The enzymatic synthesis, by transglucosylation of a homologous series of glycosidically substituted maltooligosaccharides and their use as amylase substrates. Carbohydr Res. 1978;61:359. Worthington V., 1993. Αlpha-amylase. 
In Worthington Enzyme Manual: Worthington Biochemical Corp. Freehold, NJ., pp. 36–41. Worthington V., 1993. Maltose-α-glucosidase. In Worthington Enzyme Manual: Worthington Biochemical Corp. Freehold, NJ., pp. 261. Oboh G, Ademiluyi AO, Akinyemi AJ, Henle T, Saliu JA, Schwarzenbolz U. Inhibitory effect of polyphenol-rich extracts of jute leaf (Corchorus olitorius) on key enzyme linked to type 2 diabetes (α-amylase and α-glucosidase) and hypertension (angiotensin I converting) in vitro. J Funct Foods. 2012;4(2):450–8 https://doi.org/10.1016/j.jff.2012.02.003. Gavanji S, Larki B, Mortazaeinezhad F. Bioinformatic prediction of interaction between flavonoids of propolis of honey bee and envelope glycoprotein GP120. International Journal of Scientific Research in Environmental Sciences. 2014;2:85–93. Jia W, Gao W, Tang L. Antidiabetic herbal drugs officially approved in China. Phytother Res. 2003;17(10):1127–34. Shobana S, Sreerama YN, Malleshi NG. Composition and enzyme inhibitory properties of finger millet (Eleusine coracana L.) seed coat phenolics: mode of inhibition of·-glucosidase and pancreatic amylase. Food Chem. 2009;115(4):1268–73. Nair SS, Kavrekar V, Mishra A. In vitro studies on α-amylase and α glucosidase inhibitory activities of selected plant extracts. European Journal of Experimental Biology. 2013;3(1):128–32. Matsui T, Ogunwande IA, Abesundara KJM, Matsumoto K. Anti-hyperglycemic potential of natural products. Mini-Rev Med Chem. 2006;6(3):349–56. Matsuda H, Morikawa T, Yoshikawa M. Antidiabetogenic constituents from several natural medicines. Pure Appl Chem. 2002;74(7):1301–8. Hansawasdi C, Kawabata J, Kasai T. α -amylase inhibitors from roselle (Hibiscus sabdariffa Linn.) tea. Biosci Biotechnol Biochem. 2000;64:1041–3. Kotowaroo MI, Mahomoodally MF, Gurib-Fakim A, Subratty AH. Screening of traditional antidiabetic medicinal plants of Mauritius for possible α -amylase inhibitory effects in vitro. Phytother Res. 2006;20:228–31. Martins JC, Enassar M, Willem R, Wieruzeski JM, Lippens G, Wodak SJ. Solution structure of the main α -amylase inhibitor from amaranth seeds. Eur J Biochem. 2001;268:2379–89. Choi HJ, Kim NJ, Kim DH. Inhibitory effects of crude drugs on α -glucosidase. Arch Pharm Res. 2000;23:261–6. Saravanan S, Parimelazhagan T. In vitro antioxidant, antimicrobial and anti-diabetic properties of polyphenols of Passiflora ligularis Juss, fruit-pulp. Food Sci Human Wellness. 2014;3(2):56–64. Adefegha SA, Oboh G. Inhibition of key enzymes linked to type 2 diabetes and sodium nitroprusside-induced lipid peroxidation in rat pancreas by water extractable phytochemicals from some tropical spices. Pharm Biol. 2012;50(7):857–65. Ademiluyi AO, Oboh G, Aragbaiye FP, et al. Antioxidant properties and In vitro α-amylase and α-glucosidase inhibitory properties of phenolics constituents from different varieties of Corchorus spp. Journal of Taibah University Medical Sciences. 2015;10(3):278–87. Stephen AA, Sunday IO, Ganiyu O. Distribution of phenolic contents, antidiabetic potentials, antihypertensive properties and antioxidative effects of soursop (Annona muricata L.) fruit parts In vitro. Biochem Res Int. 2015;2015:347673. Kwon YI, Apostolidis E, Shetty K. Evaluation of pepper (Capsicum annuum) for management of diabetes and hypertension. J Food Biochem. 2007;31(3):370–85. McConkey BJ, Sobolev V, Edelman M. The performance of current methods in ligand-protein docking. Curr Sci. 2002;83:845–56. 
We really appreciate the efforts of the Department of Medical Biochemistry, University of Benin, Nigeria, Mr. Christopher Afo Ugbodaga and Dr. (Mrs.) Josephine Ofeimun. The research was privately funded by KCA. Kinsgley Chukwunonso Agu, Nkeiruka Eluehike, Reuben Oseikhumen Ofeimun, Godwin Ideho, Marianna Olukemi Ogedengbe and Priscilla Omozokpea Onose are with the Department of Medical Biochemistry, School of Basic Medical Sciences, University of Benin, Benin City, Edo State, 300001, Nigeria; Deborah Abile is with the Department of Science Laboratory Technology, Faculty of Life Sciences, University of Benin, Benin City, Edo State, 300001, Nigeria; Olusola Olalekan Elekofehinti is with the Department of Biochemistry, Adekunle Ajasin University, Ondo, Nigeria. KCA provided the funding and designed the research protocol; KCA and NE wrote the manuscript; ROO and DA assisted during the in vivo studies; GI, MOO and POO assisted during the in vitro studies; OOE assisted during the in silico studies. All authors read and approved the final manuscript. Correspondence to Nkeiruka Eluehike. Written approval of the research ethics committee guideline principles on the handling of animals of the College of Medicine, University of Benin (CMR/REC/2014/57) was adopted and strictly adhered to. All the authors participated in developing the manuscript and grant their consent for onward publication. All the authors declare no competing interests. Agu, K.C., Eluehike, N., Ofeimun, R.O. et al. Possible anti-diabetic potentials of Annona muricata (soursop): inhibition of α-amylase and α-glucosidase activities. Clin Phytosci 5, 21 (2019). doi:10.1186/s40816-019-0116-0. Received: 25 January 2019. Keywords: Annona muricata, α-amylase, α-glucosidase.
Numerical preservation of long-term dynamics by stochastic two-step methods DCDS-B Home Partitioned second order method for magnetohydrodynamics in Elsässer variables September 2018, 23(7): 2775-2801. doi: 10.3934/dcdsb.2018163 Domain-growth-induced patterning for reaction-diffusion systems with linear cross-diffusion Anotida Madzvamuse 1,, and Raquel Barreira 2, School of Mathematical and Physical Sciences, Department of Mathematics, University of Sussex, Pevensey Ⅲ, 5C15, Falmer, Brigton, BN1 9QH, England, UK Escola Superior de Tecnologia do Barreiro/IPS, Rua Américo da Silva Marinho-Lavradio, 2839-001 Barreiro, Portugal Corresponding author: [email protected] Received March 2017 Revised March 2018 Published June 2018 Figure(14) / Table(1) In this article we present, for the first time, domain-growth induced pattern formation for reaction-diffusion systems with linear cross-diffusion on evolving domains and surfaces. Our major contribution is that by selecting parameter values from spaces induced by domain and surface evolution, patterns emerge only when domain growth is present. Such patterns do not exist in the absence of domain and surface evolution. In order to compute these domain-induced parameter spaces, linear stability theory is employed to establish the necessary conditions for domain-growth induced cross-diffusion-driven instability for reaction-diffusion systems with linear cross-diffusion. Model reaction-kinetic parameter values are then identified from parameter spaces induced by domain-growth only; these exist outside the classical standard Turing space on stationary domains and surfaces. To exhibit these patterns we employ the finite element method for solving reaction-diffusion systems with cross-diffusion on continuously evolving domains and surfaces. Keywords: Reaction-diffusion, cross-diffusion, pattern formation, growing domains, evolving surfaces, moving and evolving finite elements. Mathematics Subject Classification: 35K57, 35B36, 35R01, 35R37, 65M60. Citation: Anotida Madzvamuse, Raquel Barreira. Domain-growth-induced patterning for reaction-diffusion systems with linear cross-diffusion. Discrete & Continuous Dynamical Systems - B, 2018, 23 (7) : 2775-2801. doi: 10.3934/dcdsb.2018163 W. Bangerth, T. Heister, L. Heltai, G. Kanschat, M. Kronbichler, M. Maier, B. Turcksin and T. D. Young, The deal. ii library, version 8. 1, 2013. arXiv: 1312.2266.Google Scholar J. Bard and I. Lauder, How well does Turing's theory of morphogenesis work?, J. Theor. Bio., 45 (1974), 501-531. Google Scholar R. Barreira, C. M. Elliott and A. Madzvamuse, The surface finite element method for pattern formation on evolving biological surfaces, Journal of Math. Bio., 63 (2011), 1095-1119. doi: 10.1007/s00285-011-0401-0. Google Scholar F. J. Blom, Considerations on the spring analogy, Int. J. Numer. Meth. Fluids, 12 (2000), 647-668. Google Scholar V. Capasso and D. Liddo, Asymptotic behaviour of reaction-diffusion systems in population and epidemic models. The role of cross-diffusion, J. Math. Biol., 32 (1994), 453-463. doi: 10.1007/BF00160168. Google Scholar V. Capasso and D. Liddo, Global attractivity for reaction-diffusion systems. The case of nondiagonal diffusion matrices, J. Math. Anal. and App., 177 (1993), 510-529. doi: 10.1006/jmaa.1993.1274. Google Scholar M. A. J. Chaplain, M. Ganesh and I. G. Graham, Spatio-temporal pattern formation on spherical surfaces: Numerical simulation and application to solid tumour growth, J. Math. Biol., 42 (2001), 387-423. doi: 10.1007/s002850000067. 
Google Scholar E. J. Crampin, W. W. Hackborn and P. K. Maini, Pattern formation in reaction-diffusion models with nonuniform domain growth, Bull. Math. Biol., 64 (2002), 746-769. Google Scholar K. Deckelnick, G. Dziuk and C. M. Elliott, Computation of geometric partial differential equations and mean curvature flow, Acta Numer., 14 (2005), 139-232. doi: 10.1017/S0962492904000224. Google Scholar A. Donna and C. Helzel, A finite volume method for solving parabolic equations on logically Cartesian curved surface meshes, SIAM J. Sci. Comput., 31 (2009), 4066-4099. doi: 10.1137/08073322X. Google Scholar G. Dziuk and C. M. Elliott, Surface finite elements for parabolic equations, J. Comp. Math., 25 (2007), 385-407. Google Scholar G. Dziuk and C. M. Elliott, Eulerian finite element method for parabolic PDEs on implicit surfaces, Interfaces Free Bound, 10 (2008), 119-138. doi: 10.4171/IFB/182. Google Scholar G. Dziuk and C. M. Elliott, An Eulerian approach to transport and diffusion on evolving implicit surfaces, Comput. Vis. Sci., 13 (2010), 17-28. doi: 10.1007/s00791-008-0122-0. Google Scholar G. Dziuk and C. M. Elliott, Finite element methods for surface PDEs, Acta Numer., 22 (2013), 289-396. doi: 10.1017/S0962492913000056. Google Scholar C. M. Elliott, B. Stinner, V. Styles and R. Welford, Numerical computation of advection and diffusion on evolving diffuse interfaces, IMA J. Numer. Anal., 31 (2011), 786-812. doi: 10.1093/imanum/drq005. Google Scholar G. Gambino, M. C. Lombardo and M. Sammartino, Turing instability and traveling fronts for nonlinear reaction-diffusion system with cross-diffusion, Maths. Comp. in Sim., 82 (2012), 1112-1132. doi: 10.1016/j.matcom.2011.11.004. Google Scholar G. Gambino, M. C. Lombardo and M. Sammartino, Pattern formation driven by cross-diffusion in 2-D domain, Nonlinear Analysis: Real World Applications, 14 (2013), 1755-1779. doi: 10.1016/j.nonrwa.2012.11.009. Google Scholar A. Gierer and H. Meinhardt, A theory of biological pattern formation, Kybernetik, 12 (1972), 30-39. Google Scholar J. B. Greer, A. L. Bertozzi and G. Sapiro, Fourth order partial differential equations on general geometries, J. Comput. Phys., 216 (2006), 216-246. doi: 10.1016/j.jcp.2005.11.031. Google Scholar G. Hetzer, A. Madzvamuse and W. Shen, Characterization of Turing diffusion-driven instability on evolving domains, Discrete Cont. Dyn. Syst., 32 (2012), 3975-4000. doi: 10.3934/dcds.2012.32.3975. Google Scholar S. E. Hieber and P. Koumoutsakos, A Lagrangian particle level set method, J. Comput. Phys., 210 (2005), 342-367. doi: 10.1016/j.jcp.2005.04.013. Google Scholar M. Iida and M. Mimura, Diffusion, cross-diffusion an competitive interaction, J. Math. Biol., 53 (2006), 617-641. doi: 10.1007/s00285-006-0013-2. Google Scholar S. Kovács, Turing bifurcation in a system with cross-diffusion, Nonlinear Analysis, 59 (2004), 567-581. doi: 10.1016/j.na.2004.07.025. Google Scholar O. Lakkis, A. Madzvamuse and C. Venkataraman, Implicit-explicit timestepping with finite element approximation of reaction-diffusion systems on evolving domains, SIAM Journal on Numerical Analysis, 51 (2013), 2309-2330. doi: 10.1137/120880112. Google Scholar C. B. Macdonald, B. Merriman and S. J. Ruuth, Simple computation of reaction- diffusion processes on point clouds, Proc. Nat. Acad. Sci. USA., 110 (2013), 9209-9214. doi: 10.1073/pnas.1221408110. Google Scholar C. B. Macdonald and S. J. Ruuth, The implicit closest point method for the numerical solution of partial differential equations on surfaces, SIAM J. Sci. 
Comput., 31 (2010), 4330-4350. doi: 10.1137/080740003. Google Scholar A. Madzvamuse, R. K. Thomas, P. K. Maini and A. J. Wathen, A numerical approach to the study of spatial pattern formation in the ligaments of arcoid bivalves, Bulletin of Mathematical Biology., 64 (2002), 501-530. Google Scholar A. Madzvamuse, P. K. Maini and A. J. Wathen, A moving grid finite element method applied to a model biological pattern generator, J. Comp. Phys., 190 (2003), 478-500. doi: 10.1016/S0021-9991(03)00294-8. Google Scholar A. Madzvamuse, A. J. Wathen and P. K. Maini, A moving grid finite element method for the simulation of pattern generation by Turing models on growing domains, J. Sci. Comp., 24 (2005), 247-262. doi: 10.1007/s10915-004-4617-7. Google Scholar A. Madzvamuse, Time-stepping schemes for moving grid finite elements applied to reaction-diffusion systems on fixed and growing domains, J. Sci. Phys., 216 (2006), 239-263. doi: 10.1016/j.jcp.2005.09.012. Google Scholar A. Madzvamuse, A modified backward Euler scheme for advection-reaction-diffusion systems. Mathematical modeling of biological systems, Mathematical Modeling of Biological Systems, 1 (2007), 183-190. Google Scholar A. Madzvamuse, E. A. Gaffney and P. K. Maini, Stability analysis of non-autonomous reaction-diffusion systems: the effects of growing domains, J. Math. Biol., 61 (2010), 133-164. doi: 10.1007/s00285-009-0293-4. Google Scholar A. Madzvamuse, H. S. Ndakwo and R. Barreira, Cross-diffusion-driven instability for reaction-diffusion systems: Analysis and simulations, Journal of Math. Bio., 70 (2014), 709-743. doi: 10.1007/s00285-014-0779-6. Google Scholar A. Madzvamuse and R. Barreira, Exhibiting cross-diffusion-induced patterns for reaction-diffusion systems on evolving domains and surfaces, Physical Review E, 90 (2014). 043307-1-043307-14. ISSN 1539-3755.Google Scholar A. Madzvamuse, H. S. Ndakwo and R. Barreira, Stability analysis of reaction-diffusion models on evolving domains: The effects of cross-diffusion, Discrete and Continuous Dynamical Systems. Series A., 36 (2016), 2133-2170. doi: 10.3934/dcds.2016.36.2133. Google Scholar P. K. Maini, E. J. Crampin, A. Madzvamuse, A. J. Wathen and R. D. K. Thomas, Implications of Domain Growth in Morphogenesis, Mathematical Modelling and Computing in Biology and Medicine: Proceedings of the 5th European Conference for Mathematics and Theoretical Biology Conference, 2002.Google Scholar M. S. McAfree and O. Annunziata, Cross-diffusion in a colloid-polymer aqueous system, Fluid Phase Equilibria, 356 (2013), 46-55. Google Scholar J. D. Murray, Mathematical Biology. II, volume 18 of Interdisciplinary Applied Mathematics. Springer-Verlag, New York. Third edition. Spatial models and biomedical applications, 2003. Google Scholar S. Osher and R. Fedkiw, Level Set Methods and Dynamic Implicit Surfaces, volume 153 of Applied Mathematical Sciences. Springer-Verlag, New York, 2003. doi: 10.1007/b98879. Google Scholar I. Prigogine and R. Lefever, Symmetry breaking instabilities in dissipative systems. Ⅱ, J. Chem. Phys., 48 (1968), 1695-1700. Google Scholar F. Rossi, V. K. Vanag, E. Tiezzi and I. R. Epstein, Quaternary cross-diffusion in water-in-oil microemulsions loaded with a component of the Belousov-Zhabotinsky reaction, The Journal of Physical Chemistry B, 114 (2010), 8140-8146. Google Scholar R. Ruiz-Baier and C. Tian, Mathematical analysis and numerical simulation of pattern formation under cross-diffusion, Nonlinear Analysis: Real World Applications, 14 (2013), 601-612. 
doi: 10.1016/j.nonrwa.2012.07.020. Google Scholar A. Schmidt and K. G. Siebert, Design of Adaptive Element Software - The Finite Element Toolbox ALBERTA, vol. 42 of Lecture Notes in Computational Science and Engineering, Springer, 2005. Google Scholar J. Schnakenberg, Simple chemical reaction systems with limit cycle behaviour, J. Theor. Biol., 81 (1979), 389-400. doi: 10.1016/0022-5193(79)90042-0. Google Scholar J. A. Sethian, Level Set Methods and Fast Marching Methods, volume 3 of Cam- bridge Monographs on Applied and Computational Mathematics. Cambridge University Press, Cambridge, second edition. Evolving interfaces in computational geometry, fluid mechanics, computer vision, and materials science, 1999. Google Scholar L. Tian, Z. Lin and M. Pedersen, Instability induced by cross-diffusion in reaction-diffusion systems, Nonlinear Analysis: Real World Applications, 11 (2010), 1036-1045. doi: 10.1016/j.nonrwa.2009.01.043. Google Scholar A. Turing, On the chemical basis of morphogenesis, Phil. Trans. Royal Soc. B., 237 (1952), 37-72. doi: 10.1098/rstb.1952.0012. Google Scholar V. K. Vanag and I. R. Epstein, Cross-diffusion and pattern formation in reaction diffusion systems Physical Chemistry Chemical Physics, 17 (2007), 037110, 11 pp. doi: 10.1063/1.2752494. Google Scholar A. Vergara, F. Capuano, L. Paduano and R. Sartorio, Lysozyme mutual diffusion in solutions crowded by poly(ethylene glycol), Macromolecules, 39 (2006), 4500-4506. Google Scholar Z. Xie, Cross-diffusion induced Turing instability for a three species food chain model, J. Math. Analy. and Appl., 388 (2012), 539-547. doi: 10.1016/j.jmaa.2011.10.054. Google Scholar A. M. Zhabotinsky, A history of chemical oscillations and waves, Chaos, 1 (1991), 379-386. Google Scholar J.-F. Zhang, W.-T. Li, Wang, (2011). Turing patterns of a strongly coupled predator-prey system with diffusion effects, Nonlinear Analysis, 74 (2001), 847–858. doi: 10.1016/j.na.2010.09.035. Google Scholar E. P. Zemskov, V. K. Vanag and I. R. Epstein, Amplitude equations for reaction-diffusion systems with cross-diffusion. Phys. Rev. E., 84 (2011), 036216.Google Scholar Figure 1. Phase-diagrams corresponding to the non-autonomous system of ordinary differential equations (3.1)-(3.2) with exponential growth. A stable limit cycle or spiral point exists depending on the choice of the parameter values $a$ and $b$. (a) $a = 0.1$, $b = 0.75$, (b) $a = 0.15$, $b = 0.6$, (c) $a = 0.15$, $b = 0.5$, (d) $a = 0.1$, $b = 0.1$, (e) $a = 0.1$, $b = 0.75$, and (f) $a = 0.12$, $b = 0.5$. Other parameter values are fixed as follows: $r = 0.01$, $\gamma = 200$, $m = 2$, $\kappa = 2$ and $A = 1$ Figure 5. (a) Snap-shots of the evolving parameter space for exponential growth rate with diffusion coefficient $d = 10, $ linear cross-diffusion coefficients $d_u = d_v = 0$, with $\gamma = 200$ and growth rate $r = 0.01$. We select the model kinetic parameter values $a = 0.1$ and $b = 0.75$ from the green parameter space in (a). (b)- (e) Finite element numerical simulations exhibiting the formation of spatial structure corresponding to the chemical specie $u$ during growth development. In the absence of domain growth, these patterns are non-existent Figure 6. (a) Snap-shots of the evolving parameter space for exponential growth rate with diffusion coefficient $d = 10, $ linear cross-diffusion coefficients $d_u = 1, d_v = 0$, with $\gamma = 200$ and growth rate $r = 0.01$. We select the model kinetic parameter values $a = 0.1$ and $b = 0.75$ from the green parameter space in (a). 
(b)- (h) Finite element numerical simulations exhibiting the formation of spatial structure corresponding to the chemical specie $u$ during growth development. Note that when the domain is not sufficiently large enough, no patterns are observed Figure 7. (a) Snap-shots of the evolving parameter space for exponential growth rate with diffusion coefficient $d = 10, $ linear cross-diffusion coefficients $d_u = 0, d_v = 1$, with $\gamma = 200$ and growth rate $r = 0.01$. We select the model kinetic parameter values $a = 0.15$ and $b = 0.6$ from the green parameter space in (a). (b)- (e) Finite element numerical simulations exhibiting the formation of spatial structure corresponding to the chemical specie $u$ during growth development Figure 8. (a) Snap-shots of the evolving parameter space for exponential growth rate with diffusion coefficient $d = 10, $ linear cross-diffusion coefficients $d_u = 1, d_v = 1$, with $\gamma = 200$ and growth rate $r = 0.01$. We select the model kinetic parameter values $a = 0.15$ and $b = 0.5$ from the green parameter space in (a). (b)- (f) Finite element numerical simulations exhibiting the formation of spatial structure corresponding to the chemical specie $u$ during growth development Figure 9. (a) Snap-shots of the evolving parameter space for exponential growth rate with diffusion coefficient $d = 1, $ linear cross-diffusion coefficients $d_u = 1, d_v = 0.5$, with $\gamma = 200$ and growth rate $r = 0.01$. We select the model kinetic parameter values $a = 0.1$ and $b = 0.1$ from the green parameter space in (a). (b)- (e) Finite element numerical simulations exhibiting the formation of spatial structure corresponding to the chemical specie $u$ during growth development Figure 10. (a) Snap-shots of the evolving parameter space for exponential growth rate with diffusion coefficient $d = 1, $ linear cross-diffusion coefficients $d_u = 1, d_v = 0.5$, with $\gamma = 200$ and growth rate $r = 0.01$. We select the model kinetic parameter values $a = 0.1$ and $b = 0.75$ from the green parameter space in (a). (b) - (f) Finite element numerical simulations exhibiting the formation of spatial structure corresponding to the chemical specie $u$ during growth development Figure 11. (a) Snap-shots of the evolving parameter space for exponential growth rate with diffusion coefficient $d = 1, $ linear cross-diffusion coefficients $d_u = -0.8, d_v = 1$, with $\gamma = 200$ and growth rate $r = 0.01$. We select the model kinetic parameter values $a = 0.15$ and $b = 0.25$ from the green parameter space in (a). (b) - (f) Finite element numerical simulations exhibiting the formation of spatial structure corresponding to the chemical specie $u$ during growth development Figure 12. (a) Snap-shots of the evolving parameter space for exponential growth rate with diffusion coefficient $d = 1, $ linear cross-diffusion coefficients $d_u = 1, d_v = 0.8$, with $\gamma = 200$ and growth rate $r = 0.01$. We select the model kinetic parameter values $a = 0.12$ and $b = 0.5$ from parameter space in $(a)$. (b) - (g) Finite element numerical simulations exhibiting the formation of spatial structure corresponding to the chemical specie $u$ during growth development Figure 13. (a)-(d) Snap-shots of the finite element numerical simulations on the evolving unit sphere under exponential growth rate with diffusion coefficient $d = 1, $ linear cross-diffusion coefficients $d_u = -0.8, d_v = 1$, with $\gamma = 200$ and growth rate $r = 0.01$. 
We select the model kinetic parameter values $a = 0.15$ and $b = 0.2$ from the domain-induced parameter space shown in Figure 11(a) Figure 14. (a)-(d) Snap-shots of the finite element numerical simulations on the evolving saddle like surface under exponential growth rate with diffusion coefficient $d = 1, $ linear cross-diffusion coefficients $d_u = 1, d_v = 0.5$, with $\gamma = 200$ and growth rate $r = 0.01$. We select the model kinetic parameter values $a = 0.1$ and $b = 0.1$ from the domain-induced parameter space shown in Figure 9(a) Figure 2. Snap-shots of continuously evolving parameter spaces for an exponential evolution of the domain with diffusion coefficient $d = 10$ with varying linear cross-diffusion coefficients $d_u$ and $d_v$ and the growth rate $r$. Row-wise: (a)-(c) $d_u = d_v = 0$, (d)-(f) $d_u = 0$ and $d_v = 1$, (g)-(i) $d_u = 1$ and $d_v = 0$, (j)-(l) $d_u = d_v = 1$. Column-wise: First column: $r = 0.01$, second column, $r = 0.04$ and third column: $r = 0.08$. All plots are exhibited at times $t = 0$ (no growth), $t = 0.1$ and $t = 0.3$ Figure 3. Snap-shots of continuously evolving parameter spaces for an exponential evolution of the domain with diffusion coefficient $d = 1$ with variable linear cross-diffusion coefficients and growth rates: Row-wise: (a)-(c) $d_u = 1$, $d_v = 0.8$, (d)-(f) $d_u = 0.8$, $d_v = 1$, and (g)-(i) $d_u = -0.8$, $d_v = 1$. Column-wise: First column: $r = 0.01$, second column, $r = 0.04$ and third column: $r = 0.08$. All plots are exhibited at times $t = 0$ (no growth), $t = 0.1$ and $t = 0.3$. Such spaces do not exist if linear cross-diffusion is absent (with or without domain growth) Figure 4. Super imposed parameter spaces with no growth, linear, logistic and exponential growth profiles of the domain with diffusion coefficient $d = 1$ and linear cross-diffusion coefficients $d_u = 1$ and $d_v = 0.8$. The growth rate $r$ is varied row-wise accordingly: (a)-(b) $r = 0.01$, (c)-(d) $r = 0.04$, and (e)-(f) $r = 0.08$. Snap-shots of the continuously evolving parameter spaces at times $t = 0$ and $t = 0.1$ (first column) and $t = 0$ and $t = 0.3$ (second column), respectively. Without linear cross-diffusion, these spaces do not exist Table 1. Table illustrating the function $h(t)$ for linear, exponential and logistic growth functions on evolving domains. Similar functions can be obtained for evolving surfaces. $\kappa$ is the carrying capacity (final domain size) corresponding to the logistic growth function. In the above $h (t)$ as defined by (2.6) (or (2.7)) on planar domains (or on surfaces) Type of growth Growth Function $\rho (t)$ $h(t)= \nabla \cdot \mathit{\boldsymbol{v}}$ $q(t)=e^{-\int_{t_0}^t h(\tau) d\tau }$ Linear $\rho(t)=rt+1$ $h(t)=\frac{m\, r}{rt+1}$ $q(t)= \left(\frac{1}{rt+1}\right)^m$ Exponential $\rho(t)=e^{rt}$ $h(t)=m\, r$ $q(t)=e^{-mrt}$ Logistic $\rho (t) = \frac{\kappa Ae^{\kappa rt}}{1+Ae^{\kappa rt}}, \, A = \frac{1}{\kappa-1}$ $h(t)= m\, r\, \Big(\kappa-\rho (t)\Big)$ $q(t) = \left(\frac{e^{-\kappa r t}+A}{1+A}\right)^m$ Anotida Madzvamuse, Hussaini Ndakwo, Raquel Barreira. Stability analysis of reaction-diffusion models on evolving domains: The effects of cross-diffusion. Discrete & Continuous Dynamical Systems - A, 2016, 36 (4) : 2133-2170. doi: 10.3934/dcds.2016.36.2133 Yuan Lou, Wei-Ming Ni, Shoji Yotsutani. Pattern formation in a cross-diffusion system. Discrete & Continuous Dynamical Systems - A, 2015, 35 (4) : 1589-1607. doi: 10.3934/dcds.2015.35.1589 Hideki Murakawa. 
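The growth laws collected in Table 1 determine both the dilution term $h(t) = \nabla \cdot \mathit{\boldsymbol{v}}$ and the factor $q(t)=e^{-\int_{t_0}^t h(\tau) d\tau }$ that enter the evolving-domain analysis. The following sketch evaluates the linear, exponential and logistic expressions of Table 1 for a planar domain ($m = 2$) and checks that $q(t) = \rho(t)^{-m}$ when $\rho(0) = 1$; the values of $r$, $\kappa$ and the evaluation times are illustrative only.

```python
# Sketch of the domain-growth laws from Table 1 (planar domain, m = 2):
# h(t) = div(v) and q(t) = exp(-int_0^t h) for linear, exponential and
# logistic growth. Parameter values below (r, kappa, times) are illustrative.
import numpy as np

m, r, kappa = 2, 0.01, 2.0
A = 1.0 / (kappa - 1.0)

def growth(kind, t):
    """Return (rho, h, q) for the chosen growth law at time(s) t."""
    t = np.asarray(t, dtype=float)
    if kind == "linear":
        rho = r * t + 1.0
        h = m * r / (r * t + 1.0)
        q = (1.0 / (r * t + 1.0)) ** m
    elif kind == "exponential":
        rho = np.exp(r * t)
        h = m * r * np.ones_like(t)
        q = np.exp(-m * r * t)
    elif kind == "logistic":
        rho = kappa * A * np.exp(kappa * r * t) / (1.0 + A * np.exp(kappa * r * t))
        h = m * r * (kappa - rho)
        q = ((np.exp(-kappa * r * t) + A) / (1.0 + A)) ** m
    else:
        raise ValueError(kind)
    return rho, h, q

for kind in ("linear", "exponential", "logistic"):
    rho, h, q = growth(kind, [0.0, 0.1, 0.3])
    # q should equal (rho(0)/rho(t))^m = rho(t)^(-m) since rho(0) = 1
    assert np.allclose(q, rho ** (-m))
    print(kind, np.round(rho, 4), np.round(h, 4), np.round(q, 4))
```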
Enhanced Light Emission due to Formation of Semi-polar InGaN/GaN Multi-quantum Wells Wan-Ru Zhao1, Guo-En Weng2, Jian-Yu Wang3, Jiang-Yong Zhang1, Hong-Wei Liang4, Takashi Sekiguchi5 & Bao-Ping Zhang1 Nanoscale Research Letters volume 10, Article number: 459 (2015) Cite this article InGaN/GaN multi-quantum wells (MQWs) are grown on (0001) sapphire substrates by metal organic chemical vapor deposition (MOCVD) with special growth parameters to form V-shaped pits simultaneously. Measurements by atomic force microscopy (AFM) and transmission electron microscopy (TEM) demonstrate the formation of MQWs on both (0001) and (\( 1\overline{1}01 \)) side surface of the V-shaped pits. The latter is known to be a semi-polar surface. Optical characterizations together with theoretical calculation enable us to identify the optical transitions from these MQWs. The layer thickness on (\( 1\overline{1}01 \)) surface is smaller than that on (0001) surface, and the energy level in the (\( 1\overline{1}01 \)) semi-polar quantum well (QW) is higher than in the (0001) QW. As the sample temperature is increased from 15 K, the integrated cathodoluminescence (CL) intensity of (0001) MQWs increases first and then decreases while that of the (\( 1\overline{1}01 \)) MQWs decreases monotonically. The integrated photoluminescence (PL) intensity of (0001) MQWs increases significantly from 15 to 70 K. These results are explained by carrier injection from (\( 1\overline{1}01 \)) to (0001) MQWs due to thermal excitation. It is therefore concluded that the emission efficiency of (0001) MQWs at high temperatures can be greatly improved due to the formation of semi-polar MQWs. Recently, GaN based light-emitting diodes (LEDs) with InGaN/GaN multi-quantum wells (MQWs) as the active region is being widely used in the field of solid-state semiconductor lighting (SSL). However, typical InGaN/GaN MQW LEDs grown on (0001) sapphire substrate suffer from high threading dislocations density (108/cm2–1010/cm2) caused by mismatch in lattice constants and in thermal expansion coefficients between the GaN-based materials and the sapphire substrate [1, 2] and strong quantum-confined Stark effect (QCSE) due to the large internal electric field originated from both spontaneous polarization, induced by wurtzite crystal structure of GaN materials, and piezoelectric polarization induced by strain effect [3–6]. They both can cause a heavy "efficiency droop" [7]. To overcome these problems, there are many methods have been reported to enhance the light emission efficiency, such as using GaN substrate [8] and lateral epitaxial overgrowth [9]. These methods, however, cannot be widely used because of the high cost. Consequently, semi-polar (\( 1\overline{1}01 \)) quantum well (QW) grown in the sidewall of the V-shaped pits is attracting attentions [10–12] because, unlike conventional (0001) QW, it has very weak QCSE due to very small internal electric field. Additionally, the In incorporation efficiency has been proposed to be higher in semi-polar InGaN/GaN MQWs than in polar InGaN/GaN MQWs [13]. Therefore, the semi-polar InGaN/GaN MQWs are beneficial for fabricating high-efficiency GaN-based optoelectronic devices. The promising application of semi-polar InGaN/GaN MQWs has inspired more and more research teams to study its emission properties. It has been reported that growing semi-polar InGaN/GaN MQWs in V-shaped pits could increase the light emission efficiency by A. Hangleiter et al. [10] and S. H. Han et al. [11]. 
The reason has been ascribed to an energy barrier around the V-shaped pit which keeps carriers from reaching the dislocation and recombining nonradiatively. In this paper, we also report an improvement of light emission efficiency in the (0001) InGaN/GaN MQWs sample with forming semi-polar (\( 1\overline{1}01 \)) InGaN/GaN MQWs in the sidewall of the V-shaped pits. An important phenomenon is observed here to explain why the light emission efficiency of the sample could be enhanced through forming semi-polar InGaN/GaN MQWs, which is proposed for the first time. The InGaN/GaN MQW sample was epitaxially grown on a (0001)-oriented sapphire substrate by metal organic chemical vapor deposition (MOCVD) with forming a lot of V-shaped pits simultaneously under special growth conditions. Trimethylgallium (TMGa), trimethylindium (TMIn), and ammonia (NH3) were used as precursors for Ga, In, and N, while silane (SiH4) and biscyclopentadienyl magnesium (CP2Mg) were used as n-type and p-type dopants, respectively. The epitaxial structure includes 2 μm un-doped-GaN layer, 2 μm n-GaN layer, 500 nm n-In0.05Ga0.95N layer, five-pairs In0.23Ga0.77N/GaN MQWs containing polar InGaN/GaN MQWs in (0001) plane and semi-polar InGaN/GaN MQWs in (\( 1\overline{1}01 \)) plane of the V-shaped pits, and 250 nm p-InGaN layer. The epitaxial structure of the sample is shown in Fig. 1. Schematic illustration of the sample The sample's microstructure properties are measured by atomic force microscopy (AFM) and transmission electron microscopy (TEM). The electron and hole quantized energy levels and wave functions of the two kinds of QWs are calculated by using finite difference method. Temperature-dependent cathodoluminescence (CL) and photoluminescence (PL) experiments are used to measure the sample's light emission properties. The CL experiments are performed with the incident 5-keV high-energy electron beam focused on the V-shaped pit over a temperature range from 15 to 200 K. The luminescence signal is collected by a parabolic mirror, then dispersed by a monochromator, and detected by a charge-coupled device (CCD). The PL measurements were performed with the samples held in a helium closed-circuit refrigerator over a temperature range from 15 to 300 K. The PL signal is also dispersed by a monochromator and detected by a CDD. A 405-nm continuous wave (CW) semiconductor laser is used as the PL excitation source and large-area excitation. Figure 2a shows the 5 μm × 5 μm AFM image of the surface of the sample; a lot of V-shaped pits are clearly observed. Figure 2b shows the cross-sectional TEM image of the (0001) area. Figure 2c shows the TEM image of the V-shaped pits. A threading dislocation exists in the inside of the V-shaped pit. The QW grown on (0001) plane has a thickness of 4 nm and a period of 17 nm. However, the semi-polar QW in the V-shaped pits grown on (\( 1\overline{1}01 \)) plane exhibits a much thinner thickness than that of c-plane QW which has a thickness of 1.6 nm and a period of 6.9 nm. It is because that the atomic adhesion is different between the (0001) plane and (\( 1\overline{1}01 \)) plane, which leading to the faster growth rate on (0001) surface than on (\( 1\overline{1}01 \)) surface. The structure of a period semi-polar QW and (0001) QW is also shown schematically in Fig. 2d, e. The angle between (0001) plane and (\( 1\overline{1}01 \)) plane is θ. a 5 μm×5 μm AFM image. b Cross-sectional TEM of (0001) area. c TEM image of V-shaped pit. 
d Schematic illustration of a period semi-polar QW and (0001) QW. e Schematic illustration of (0001) plane and (1\( \overline{1} \)01) plane We here calculate the electron and hole quantized energy levels and wave functions of the two kinds of QWs by using finite difference method based on the Schrodinger's equation: $$ -\frac{\hslash^2}{2{m}^{*}}\cdot \frac{d^2}{d{x}^2}\psi (x)+V(x)\cdot \psi (x)={E}_n\cdot \psi (x) $$ where ћ is the reduced Planck constant, m* is the electron or hole effective mass, V(x) is the energy band, ψ(x) is the wave function, E n is the energy level. To calculate ψ(x) and E n , we must know V(x). And, V(x) is influenced by electrostatic field in QW, so we should calculate the electrostatic field first. For the periodic MQW structure, the change values of electric potential of well layer and barrier layer in each period are the same, just opposite in sign. The electrostatic field of one period QW grown by materials of thickness L and dielectric constants ε can be expressed as [14, 15]: $$ \begin{array}{l}{E}_b=\frac{L_w\left({P}_w-{P}_b\right)}{\varepsilon_0\left({\varepsilon}_w{L}_b+{\varepsilon}_b{L}_w\right)}\\ {}\end{array} $$ $$ {E}_w=\frac{L_b\left({P}_b-{P}_w\right)}{\varepsilon_0\left({\varepsilon}_w{L}_b+{\varepsilon}_b{L}_w\right)} $$ where the subscripts w and b represent the well layer and the barrier layer, respectively. ε 0 is the vacuum dielectric constant, and P is the total polarization including spontaneous and piezoelectric. The spontaneous polarization P SP and piezoelectric polarization P PZ can be calculated by [16–18]: $$ {P}_{\mathrm{SP}}^{(0001)}\left({\mathrm{In}}_x{\mathrm{Ga}}_{1-x}N\right)=-0.042x-0.034\left(1-x\right)+0.037x\left(1-x\right) $$ $$ {P}_{\mathrm{PZ}}^{(0001)}\left({\mathrm{In}}_x{\mathrm{Ga}}_{1-x}N\right)=0.148x-0.0424\left(1-x\right) $$ $$ {P}_{\mathrm{SP}}^{\left(1\overline{1}01\right)}={P}_{\mathrm{SP}}^{(0001)}\cdot \cos \theta $$ $$ \begin{array}{l}{P}_{\mathrm{PZ}}^{\left(1\overline{1}01\right)}={e}_{31}\cdot \cos \theta \cdot {\varepsilon}_{x^{,}{x}^{,}}+\left({e}_{31}\cdot { \cos}^3\theta +\frac{e_{33}-{e}_{15}}{2}\cdot \sin \theta \cdot \sin 2\theta \right)\cdot {\varepsilon}_{y^{,}{y}^{,}}\\ {}+\left(\frac{e_{33}+{e}_{15}}{2}\cdot \sin \theta \cdot \sin 2\theta +{e}_{33}\cdot { \cos}^3\theta \right)\cdot {\varepsilon}_{z^{,}{z}^{,}}\\ {}+\left[\left({e}_{31}-{e}_{33}\right)\cdot \cos \theta \cdot \sin 2\theta +{e}_{15}\cdot \sin \theta \cdot \cos 2\theta \right]\cdot {\varepsilon}_{y^{,}{z}^{,}}\end{array} $$ where e 31 , e 33 , and e 15 are piezoelectric tensor, ε x'x' , ε y'y' , ε z'z' , and ε y'z' are elastic strain, and x is the In content. The material parameters of InGaN are obtained from Vegard's law [17] by using the parameters of wurtzite GaN and InN which are listed in Table 1. The ratio of conduction band discontinuity to valence band discontinuity is assumed to be 7:3 at the interfaces of InGaN/GaN. Table 1 Material parameters of GaN and InN Figure 3 shows the calculated electron and hole quantized energy levels and wave functions together with the energy band structure. The light-emitting wavelength of (0001) QW and (\( 1\overline{1}01 \)) semi-polar QW is calculated to be 585 and 438 nm, respectively. The emitting photon energy of (\( 1\overline{1}01 \)) QW is 355.8 meV higher than that of (0001) QW. This energy barrier is much larger than the thermal excitation energy caused by increasing temperature. So, the carriers in (0001) QW cannot nonradiatively [10, 11]. 
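As a concrete illustration of the finite-difference step behind Eq. (1), the sketch below assembles and diagonalizes the discretized Hamiltonian for a single 4-nm (0001) well between flat barriers, with a uniform built-in field tilting the well region. The effective mass (0.20 m0), band offset (0.4 eV), field (0.1 eV/nm) and grid are assumed round numbers chosen for illustration, not the parameter set of Table 1 and Fig. 3, and the coupled polarization and strain treatment described above is not included.

```python
# Minimal finite-difference sketch of Eq. (1): lowest quantized electron level
# in a single InGaN well between GaN barriers with a uniform built-in field.
# Effective mass, band offset and field below are assumed round numbers.
import numpy as np

hbar2_2m0 = 0.0380998   # hbar^2 / (2 m0) in eV nm^2
m_eff = 0.20            # electron effective mass in units of m0 (assumed)
well_width = 4.0        # nm, (0001) QW thickness from the TEM measurement
barrier = 0.4           # eV, conduction-band offset (assumed)
field = 0.1             # eV/nm, potential slope from the built-in field (assumed)

# Real-space grid spanning the well plus part of the barriers
n, span = 400, 12.0                       # points, nm
x = np.linspace(-span / 2, span / 2, n)
dx = x[1] - x[0]

# Band profile V(x): flat barriers, tilted well (quantum-confined Stark effect)
V = np.where(np.abs(x) <= well_width / 2, field * x, barrier)

# Discretized Hamiltonian: H = -(hbar^2 / 2m*) d^2/dx^2 + V(x)
kin = hbar2_2m0 / m_eff / dx**2
H = (np.diag(2.0 * kin + V)
     + np.diag(-kin * np.ones(n - 1), 1)
     + np.diag(-kin * np.ones(n - 1), -1))

energies, wavefuncs = np.linalg.eigh(H)
print(f"Ground-state electron level: {energies[0]:.3f} eV "
      f"(relative to the band edge at the well centre)")
```

Qualitatively, re-running the same sketch with the thinner 1.6-nm well and a weaker field illustrates why the semi-polar level is expected to lie above the (0001) level.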
At the same time, we suspect that the carriers in semi-polar QW are easy to transfer into (0001) QW to recombine and shine. We also observe that the energy band of (0001) QW is more tilted than that of (\( 1\overline{1}01 \)) QW. It is because the polarization intensity of (0001) QW is much larger than (\( 1\overline{1}01 \)) QW's. So, the QCSE of (0001) QW is very strong, while that of (\( 1\overline{1}01 \)) QW is very weak. Calculated electron and hole quantized energy levels and wave functions together with energy band structure The CL spectra of the sample at different temperatures from 15 to 200 K are shown in Fig. 4a. It can be seen that the CL spectra contain three light-emitting peaks, 380, 440, and 585 nm. The peak of 380 nm is emitted by the p-InGaN, which is the surface of the sample. The peaks of 440 and 585 nm are emitted by the (\( 1\overline{1}01 \)) and (0001) QW, respectively. They are in accordance with the calculated values. In CL measurement, the incident high-energy electron beam is just focused on the V-shaped pit, so the (0001) QW should not be excited in principle. However, we indeed observe the luminescence of (0001) QW from the CL spectra. At the same time, we also find that the peak of 585 nm emitted by (0001) QW is very weak and even could be negligible at the low temperature of 15 K. As the temperature increase, however, the ratio of the intensity of 585-nm peak to the intensity of 440-nm peak increases gradually. It is known that, due to In composition fluctuation, there are lots of localization centers in a common InGaN QW [19]. With increasing temperature, the localized carriers will be thermally activated to escape from the localization centers and become free in the well layer. This is considered to be true even for (\( 1\overline{1}01 \)) QW. At higher temperatures, the carrier localized in the localization centers gains enough energy to escape and become free within the (\( 1\overline{1}01 \)) QW, which causes increase of carriers transporting from (\( 1\overline{1}01 \)) QW to (0001) QW. a CL spectra of the sample from 15 to 200 K. b Dependence of CL integrated intensity on temperature. The electron beam was focused on one V-shaped pit The integrated CL intensity versus temperature is plotted in Fig. 4b. The black curve is the CL integrated intensity of p-InGaN. It decreases with the temperature rising due to the capture of carriers by threading dislocation. The red curve is the CL integrated intensity of semi-polar QW. It decreases much significantly than that of p-InGaN. This is because that the carriers in semi-polar QW not only can be captured by threading dislocation but also can transfer into (0001) QW. The CL integrated intensity of (0001) QW is plotted by the blue curve. It first increases and then decreases. The increase is due to transferring of more carriers from semi-polar QW into (0001) QW with elevating temperature. At even higher temperatures, however, the nonradiative recombination [20] gradually plays a leading role, resulting in the followed decrease of the CL integrated intensity of (0001) QW. This confirms our speculation again. The PL spectra of the sample over a temperature range from 15 to 300 K are plotted in Fig. 5a. It can be seen that the PL spectra only contain the luminescence of (0001) QW. In principle, the laser spot is so big that it could cover the (0001) plane area and V-shaped pits area. So, the semi-polar QW also can be excited. 
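One standard way to quantify the thermal activation invoked in this discussion is an Arrhenius fit of the integrated intensity, I(T) = I(0)/(1 + a·exp(-Ea/kBT)), applied to a monotonically quenching emission such as the semi-polar QW signal. The sketch below shows such a fit on synthetic placeholder data; this analysis is not performed in the paper, and the extracted activation energy is purely illustrative.

```python
# Illustrative sketch (not an analysis performed in the paper): a standard
# Arrhenius model, I(T) = I0 / (1 + a*exp(-Ea/(kB*T))), is often used to
# extract an activation energy from the thermal quenching of integrated
# luminescence. The data points below are synthetic placeholders.
import numpy as np
from scipy.optimize import curve_fit

kB = 8.617e-5  # Boltzmann constant, eV/K

def arrhenius(T, I0, a, Ea):
    return I0 / (1.0 + a * np.exp(-Ea / (kB * T)))

# Synthetic integrated-intensity data (arbitrary units) for a monotonic quench
T = np.array([15.0, 50.0, 80.0, 120.0, 160.0, 200.0])
I = np.array([1.00, 0.99, 0.96, 0.70, 0.41, 0.25])

popt, _ = curve_fit(arrhenius, T, I, p0=[1.0, 50.0, 0.05], maxfev=20000)
I0, a, Ea = popt
print(f"Fitted activation energy Ea ~ {Ea*1000:.0f} meV (synthetic data)")
```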
But the PL spectra do not contain the light-emitting peak of the semi-polar QW, which is different from the CL spectra. The reason is tentatively attributed to the different excitation sources. The density of carriers excited by the high-energy electron beam in the CL measurement is much higher than that generated by the 405-nm laser in the PL measurement. In addition, carriers in the semi-polar QW may transfer into the (0001) QW and/or be captured by dislocations. Therefore, no PL emission from the semi-polar QW was observed. In the CL experiments, the carrier density is so high that, apart from transfer and capture, there are still enough carriers left to recombine radiatively.

a PL spectra of the sample from 15 to 300 K. b The PL integrated intensity versus temperature

The integrated PL intensity versus temperature is plotted in Fig. 5b. It increases significantly up to 70 K and then gradually decreases. The increase of the integrated PL intensity up to 70 K further confirms that the carriers localized by composition fluctuations in the semi-polar QW are thermally activated and transfer into the (0001) QW to recombine. When the temperature is further increased, the intensity begins to decrease due to enhanced nonradiative recombination.

In summary, we have studied the emission properties of semi-polar InGaN/GaN MQWs in the V-shaped pits. The light-emitting energy level of the semi-polar QW is 355.8 meV higher than that of the (0001) QW, which prevents carriers in the (0001) QW from reaching the dislocations in the V-shaped pits. Both the integrated CL intensity and the integrated PL intensity of the (0001) QW first increase and then decrease with rising temperature, demonstrating that the carriers localized by composition fluctuations in the semi-polar QW are thermally activated and transfer into the (0001) QW upon heating. This leads to the improvement of the light emission efficiency of the (0001) InGaN/GaN MQWs. It is clear that the optical properties of the (0001) QW can be improved through the formation of the semi-polar QW, which is beneficial for the fabrication of high-efficiency GaN-based optoelectronic devices.

AFM: atomic force microscopy
CCD: charge-coupled device
CL: cathodoluminescence
CW: continuous wave
LED: light-emitting diode
MOCVD: metal organic chemical vapor deposition
MQW: multi-quantum well
QCSE: quantum-confined Stark effect
SSL: solid-state semiconductor lighting
TEM: transmission electron microscopy

Lester SD, Ponce FA, Craford MG, Steigerwald DA (1995) High dislocation densities in high efficiency GaN-based light-emitting diodes. Appl Phys Lett 66:1249–1251 Narayanan V, Lorenz K, Wook K, Mahajan S (2001) Origins of threading dislocations in GaN epitaxial layers grown on sapphire by metalorganic chemical vapor deposition. Appl Phys Lett 78:1544–1546 Yu ET, Sullivan GJ, Asbeck PM et al (1997) Measurement of piezoelectrically induced charge in GaN/AlGaN heterostructure field-effect transistors. Appl Phys Lett 71:2794–2796 Jho YD, Yahng JS, Oh E et al (2001) Measurement of piezoelectric field and tunneling times in strongly biased InGaN/GaN quantum wells. Appl Phys Lett 79:1130–1132 Nardelli MB, Rapcewicz K, Bernholc J (1997) Polarization field effects on the electron–hole recombination dynamics in In0.2Ga0.8N/In1-xGaxN multiple quantum wells. Appl Phys Lett 71:3135–3137 Dupuis RD, Wu ZH, Fischer AM, Ponce FA, Lee W, Ryou JH, Limb J, Yoo D (2007) Effect of internal electrostatic fields in InGaN quantum wells on the properties of green light emitting diodes.
Appl Phys Lett 91(041915-1):041915–3 Wang JX, Zhang N, Liu Z, Si Z, Ren P, Wang XD, Feng XX, Dong P, Du CX, Zhu SX, Fu BL, Lu HX, Li JM (2013) Reduction of efficiency droop and modification of polarization fields of InGaN-based green light-emitting diodes via Mg-doping in the barriers. Chin Phys Lett 30(087101-1):087101–3 Tsai CC, Chang CS, Chen TY (2002) Low-etch-pit-density GaN substrates by regrowth on free-standing GaN films. Appl Phys Lett 80:3718–3720 Nam OH, Bremser MD, Zheleva TS, Davis RF (1997) Lateral epitaxy of low defect density GaN layers via organometallic vapor phase epitaxy. Appl Phys Lett 71:2638–2640 Hinze P, Hangleiter A, Hitzel F, Netzel C, Fuhrmann D, Rossow U, Ade G (2005) Suppression of nonradiative recombination by V-shaped pits in GaInN/GaN quantum wells produces a large increase in the light emission efficiency. Phys Rev Lett 95(127402-1):127402–4 Kim ST, Han SH, Lee DY, Shim HW, Lee JW, Kim DJ, Yoon S, Kim YS (2013) Improvement of efficiency and electrical properties using intentionally formed V-shaped pits in InGaN/GaN multiple quantum well light-emitting diodes. Appl Phys Lett 102(251123-1):251123–4 Jiang FY, Wu XM, Liu JL, Quan ZJ, Xiong CB, Zheng CD, Zhang JL, Mao QH (2014) Electroluminescence from the sidewall quantum wells in the V-shaped pits of InGaN light emitting diodes. Appl Phys Lett 104(221101-1):221101–5 Kneissl M, Wernicke T, Schade L, Netzel C, Rass J, Hoffmann V, Ploch S, Knauer A, Weyers M, Schwarz U (2012) Indium incorporation and emission wavelength of polar, nonpolar and semipolar InGaN quantum wells. Semicond Sci Technol 27(024014-1):024014–7 Bernardini F, Fiorentini V (2000) Polarization fields in nitride nanostructures: 10 points to think about. Appl Surf Sci 166:23–29 Weng GE, Zhang BP, Liang MM, Lv XQ, Zhang JY, Ying LY, Qiu ZR, Yaguchi H, Kuboya S, Onabe K, Chen SQ, Akiyama H (2013) Optical properties and carrier dynamics in asymmetric coupled InGaN multiple quantum wells. Funct Mater Lett 6(1350021-1):1350021–5 Tansu N, Arif RA, Ee YK (2007) Polarization engineering via staggered InGaN quantum wells for radiative efficiency enhancement of light emitting diodes. Appl Phys Lett 91(091110-1):091110–3 Ambacher O, Majewski J, Miskys C, Link A, Hermann M, Eickhoff M, Stutzmann M, Bernardini F, Fiorentini V, Tilak V, Schaff B, Eastman LF (2002) Pyroelectric properties of Al(In)GaN/GaN hetero- and quantum well structures. J Phys Condens Matter 14:3399–3434 Speck JS, Romanov AE, Baker TJ, Nakamura S (2006) Strain-induced polarization in wurtzite III-nitride semipolar layers. J Appl Phys 100(023522-1):023522–10 Wang H, Ji Z, Qu S, Wang G, Jiang Y, Liu B, Xu X, Mino H (2012) Influence of excitation power and temperature on photoluminescence in InGaN/GaN multiple quantum wells. Opt Express 20:3932–3940 Hinckley J, Zhang M, Bhattacharya P, Singh J (2009) Direct measurement of Auger recombination in In0.1Ga0.9N/GaN quantum wells and its impact on the efficiency of In0.1Ga0.9N/GaN multiple quantum well light emitting diodes. Appl Phys Lett 95(201108-1):201108–3 This work was supported by the National Natural Science Foundation of China (Grant Nos. 61274052, 11474235), the Fundamental Research Funds for the Central Universities (Grant No. 2013121024), the Key Lab of Nanodevices and Nanoapplications, Suzhou Institute of Nano-Tech and Nano-Bionics of Chinese Academy of Sciences (Grant No. 14ZS02), and the Opened Fund of the State Key Laboratory on Integrated Optoelectronics No. IOSKL2014KF08.
Department of Electronic Engineering, Xiamen University, Xiamen, 361005, People's Republic of China Wan-Ru Zhao, Jiang-Yong Zhang & Bao-Ping Zhang Department of Physics, Xiamen University, Xiamen, 361005, People's Republic of China Guo-En Weng School of Electronic Science and Engineering, Nanjing University, Nanjing, 210093, People's Republic of China Jian-Yu Wang School of Physics and Optoelectronic Engineering, Dalian University of Technology, Dalian, 116024, People's Republic of China Hong-Wei Liang World Premier International (WPI) Center for Materials Nanoarchitectonics (MANA), National Institute for Materials Science (NIMS), Namiki 1-1, Tsukuba, Ibaraki, 305-0044, Japan Takashi Sekiguchi Correspondence to Hong-Wei Liang or Bao-Ping Zhang. The work presented here was carried out in collaboration among all authors. BPZ designed the study. WRZ drafted the manuscript. HWL achieved the growth of the semi-polar InGaN/GaN MQW sample. WRZ, JYW, and TS carried out the temperature-dependent CL and PL measurements. WRZ, GEW, and JYZ analyzed the data and discussed the analysis. All authors read and approved the final manuscript. Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. Zhao, WR., Weng, GE., Wang, JY. et al. Enhanced Light Emission due to Formation of Semi-polar InGaN/GaN Multi-quantum Wells. Nanoscale Res Lett 10, 459 (2015). https://doi.org/10.1186/s11671-015-1171-1 Accepted: 22 November 2015 Semi-polar InGaN/GaN multi-quantum wells
CommonCrawl
Showing inequality for harmonic series. I want to show that $$\log N<\sum_{n=1}^{N}\frac{1}{n}<1+\log N.$$ But I don't know how to show this. real-analysis sequences-and-series inequality harmonic-numbers Kns

$\begingroup$ Hint: Use left/right hand approximations to $\int_0^N\frac{1}{x}\,dx$ $\endgroup$ – Arturo Magidin Jun 10 '12 at 2:55

$\begingroup$ Is it $\int_{0}^{N}\frac{1}{x} dx$ or $\int_{1}^{N}\frac{1}{x} dx$? $\endgroup$ – Kns Jun 10 '12 at 2:57

$\begingroup$ @Arturo Magidin What is the meaning of left/right hand approximation? $\endgroup$ – Kns Jun 10 '12 at 3:01

$\begingroup$ Yes, sorry, from $1$ to $N$. "Left hand approximation" means you use the value of the function on the left end of the subinterval to estimate the area, and "right hand approximation" means you use the value at the right end of the subinterval. $\endgroup$ – Arturo Magidin Jun 10 '12 at 3:05

I think I wrote this up somewhere on this website, but anyway, here it is again. From the figure, you can see that the area under the blue curve is bounded below by the area under the red curve from $1$ to $\infty$.

The blue curve takes the value $\frac1{k}$ over the interval $[k,k+1)$.

The red curve is given by $f(x) = \frac1{x}$ where $x \in [1,\infty)$.

The green curve takes the value $\frac1{k+1}$ over the interval $[k,k+1)$.

The area under the blue curve represents the sum $\displaystyle \sum_{k=1}^{n} \frac1{k}$, the area under the red curve is given by the integral $\displaystyle \int_{1}^{n+1} \frac{dx}{x}$, and the area under the green curve represents the sum $\displaystyle \sum_{k=1}^{n} \frac1{k+1}$.

Hence, we get $\displaystyle \sum_{k=1}^{n} \frac1{k} > \displaystyle \int_{1}^{n+1} \frac{dx}{x} = \log(n+1)$.

$\log(n+1)$ diverges as $n \rightarrow \infty$ and hence $$\lim_{n \rightarrow \infty} \displaystyle \sum_{k=1}^{n} \frac1{k} = + \infty$$

By a similar argument, comparing the areas under the red curve and the green curve, we get $$\displaystyle \sum_{k=1}^{n} \frac1{k+1} < \displaystyle \int_{1}^{n+1} \frac{dx}{x} = \log(n+1)$$ and hence we can bound $\displaystyle \sum_{k=1}^{n} \frac1{k}$ from above by $1 + \log(n+1)$.

Hence, $\forall n$, we have $$\log(n+1) < \displaystyle \sum_{k=1}^{n} \frac1{k} < 1 + \log(n+1)$$

Hence, we get $0 < \displaystyle \sum_{k=1}^{n} \frac1{k} - \log(n+1) < 1$, $\forall n$.

Hence, if $a_n = \displaystyle \sum_{k=1}^{n} \frac1{k} - \log(n+1)$, we have that $a_n$ is a monotonically increasing and bounded sequence, so $\displaystyle \lim_{n \rightarrow \infty} a_n$ exists. This limit is denoted by $\gamma$ and is called the Euler-Mascheroni constant. It is not hard to show that $\gamma \in (0.5,0.6)$ by looking at the difference in the areas of these graphs and summing up the areas of the approximate triangles.
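As a quick numerical sanity check of the bounds above (illustration only, not part of the proof), the following Python snippet evaluates $\log(n+1)$, $H_n=\sum_{k=1}^n\frac1k$, and $1+\log(n+1)$ for a few values of $n$, and also shows $a_n = H_n - \log(n+1)$ approaching $\gamma \approx 0.5772$:

    import math

    # Check log(n+1) < H_n < 1 + log(n+1) and watch a_n = H_n - log(n+1) -> gamma
    H = 0.0
    for n in range(1, 10**6 + 1):
        H += 1.0 / n
        if n in (10, 100, 1000, 10**6):
            lower = math.log(n + 1)
            print(f"n={n:>7}  log(n+1)={lower:.6f}  H_n={H:.6f}  "
                  f"1+log(n+1)={1 + lower:.6f}  a_n={H - lower:.6f}")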
It is a good exercise to show that $$\tag 1 \frac{x}{{x + 1}} \leqslant \log \left( {1 + x} \right) \leqslant x$$

One alternative is expanding $\log$ into powers of $x$ and $\dfrac{x}{x+1}$, which gives $$\eqalign{ & \log \left( {1 + x} \right) = x - \frac{{{x^2}}}{2} + \frac{{{x^3}}}{3} - \frac{{{x^4}}}{4} +- \cdots \cr & \log \left( {1 + x} \right) = \frac{x}{{x + 1}} + \frac{1}{2}{\left( {\frac{x}{{x + 1}}} \right)^2} + \frac{1}{3}{\left( {\frac{x}{{x + 1}}} \right)^3} + \frac{1}{4}{\left( {\frac{x}{{x + 1}}} \right)^4} + \cdots \cr} $$

From there we immediately get $$\frac{x}{{x + 1}} \leqslant \log \left( {1 + x} \right) \leqslant x$$

Now let $x=\dfrac 1 N$ $$\eqalign{ & \frac{{\frac{1}{N}}}{{\frac{1}{N} + 1}}\leq\log \left( {1 + \frac{1}{N}} \right)\leq\frac{1}{N} \cr & \frac{1}{{N + 1}}\leq\log \left( {\frac{{N + 1}}{N}} \right)\leq\frac{1}{N} \cr & \frac{1}{{N + 1}}\leq\log \left( {N + 1} \right) - \log N\leq\frac{1}{N} \cr} $$

Now sum from $N=1$ to $N=M$ $$\sum\limits_{N = 1}^M {\frac{1}{{N + 1}}} \leq\log \left( {M + 1} \right)\leq\sum\limits_{N = 1}^M {\frac{1}{N}} $$ $$\sum\limits_{N = 1}^{M + 1} {\frac{1}{N}} - 1 \leqslant \log \left( {M + 1} \right) \leqslant \sum\limits_{N = 1}^M {\frac{1}{N}} $$

This gives $$\sum\limits_{N = 1}^{M + 1} {\frac{1}{N}} \leqslant \log \left( {M + 1} \right) + 1$$ $$\log \left( {M + 1} \right) \leqslant \sum\limits_{N = 1}^M {\frac{1}{N}} \leqslant \sum\limits_{N = 1}^{M + 1} {\frac{1}{N}} $$

Another way to prove $(1)$ is to start from the alternative definition: $$\log x = \mathop {\lim }\limits_{k \to +\infty} k({{x^{1/k}} - 1})$$

Then note that, for $0<y<1$ and $y>1$ respectively, $$\eqalign{ & \sum\limits_{v = 0}^{k - 1} {{y^v}} \leqslant \sum\limits_{v = 0}^{k - 1} {1 = k} \cr & \sum\limits_{v = 0}^{k - 1} {{y^v}} \geqslant \sum\limits_{v = 0}^{k - 1} {1 = k} \cr} $$

Thus for $y > 0$ $${y^k} - 1 = \left( {y - 1} \right)\sum\limits_{v = 0}^{k - 1} {{y^v}} \geqslant k\left( {y - 1} \right)$$

We now let $y^k =x$ and we get $$x - 1 = \left( {y - 1} \right)\sum\limits_{v = 0}^{k - 1} {{y^v}} \geqslant k\left( {{x^{1/k}} - 1} \right)$$

Letting $k \to \infty$ gives $\log x \leqslant x - 1$ for every $x>0$; applying this bound to $1/x$ gives the companion inequality $$\log x = - \log \frac{1}{x} \geqslant - \left( {\frac{1}{x} - 1} \right) = 1 - \frac{1}{x}$$ and substituting $x \mapsto 1+x$ in the two bounds recovers $(1)$. This last exposition is due to Edmund Landau.

Pedro Tamaroff♦

$\begingroup$ How did you write $\log(1+x)$ in terms of $x/(1+x)$? $\endgroup$ – Theorem Jun 10 '12 at 7:40

$\begingroup$ The easiest way is to use $$-\log(1-x) = x+\frac{x^2}{2}+\frac{x^3}{3}+\cdots$$ and let $x=\dfrac{x'}{x'+1}$. However, I proved the expansion differently. $\endgroup$ – Pedro Tamaroff♦ Jun 10 '12 at 18:29

Look at this picture, ignoring everything to the right of $x=6$. The shaded area is $\sum\limits_{n=1}^5\dfrac1n$, and it's clearly larger than the area under the curve $y=\dfrac1x$ between $1$ and $5$, which is $\int_1^5\frac{dx}x=\ln 5$. Now shift the shaded region one unit to the left. The first shaded rectangle has an area of $1$, and the remaining four shaded rectangles fit under the curve $y=\dfrac1x$ between $1$ and $5$. The total area of those four rectangles is therefore less than $\int_1^5\frac{dx}x=\ln 5$, so the total area of all five rectangles is less than $1+\ln 5$. Now generalize.

Brian M. Scott
A simpler way is to use a consequence of Lagrange's mean value theorem applied to the function $\ln(x)$, namely: $$\frac{1}{k+1} < \ln(k+1)-\ln(k)<\frac{1}{k}, \qquad k\in\mathbb{N},\ k>0$$ Then take $k=1,2,\dots,n$ in the inequality and add the resulting inequalities. The proof is complete.

user 1357113
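For completeness (this step is not spelled out in the answer above), summing the three members of the inequality over $k=1,\dots,n$ telescopes the middle term:

$$\sum_{k=1}^{n}\frac{1}{k+1} \;<\; \sum_{k=1}^{n}\bigl(\ln(k+1)-\ln k\bigr)=\ln(n+1) \;<\; \sum_{k=1}^{n}\frac{1}{k}.$$

The right-hand inequality gives $\log N < \log(N+1) < \sum_{n=1}^{N}\frac1n$, and the left-hand inequality, applied with $N-1$ terms, gives $\sum_{n=1}^{N}\frac1n - 1 < \log N$ for $N\ge 2$, i.e. $\sum_{n=1}^{N}\frac1n < 1+\log N$ (for $N=1$ the upper bound holds with equality).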
CommonCrawl
Crop diversification analysis on red pepper dominated smallholder farming system: evidence from northwest Ethiopia

Abebe Birara Dessie1, Tadie Mirie Abate1, Taye Melese Mekie1 & Yigrem Mengist Liyew2

Ethiopia is the homeland of various crops due to its diverse and suitable agro-ecological zones. As a result, smallholder farmers in different parts of Ethiopia, including the northwestern part of the country, grow multiple crops on a small piece of land for both consumption and commercial purposes. However, the crop diversification status and extent of farmers were not well understood. Therefore, this study examined determinants of crop diversification in a pepper-dominated smallholder farming system in northwest Ethiopia. Primary data were collected through a semi-structured interview schedule administered to 385 crop producers selected using a systematic random sampling technique. Moreover, the survey was supplemented by secondary data, focus group discussions, and key informant interviews. Descriptive statistics, inferential statistics, and an econometric model were used to analyze the data. The average crop diversification index was 0.77, and most smallholder farmers (92.46%) used crop diversification as a strategy for risk reduction, nutritional improvement, consumption, and commercial needs. Moreover, the Tobit model result revealed that the status and intensity of crop diversification were significantly influenced by farmland, sex, age, land fragmentation, distance to the development center, market distance, and non-/off-farm income participation. Generally, most farm households used crop diversification as a common and effective strategy for minimizing risk, diversifying income sources, and improving nutrition and livelihoods. Therefore, crop producers, agricultural experts, the Ethiopian government, and partner organizations should give special attention to extension services, markets, and infrastructure development to enhance the role of agricultural diversification for households.

In most African countries, agricultural sectors are highly dependent on rain and are influenced by climate change and variability, such as seasonal dynamics, drought, high temperatures, and very low humidity and precipitation. Consequently, low crop yields, declining soil fertility, high environmental degradation, and augmented agricultural risks are some of the key challenges that continue to threaten households' food security status (Makate et al. 2016). In many developing countries including Ethiopia, most smallholder farmers strive to attain nutritional and food security and poverty alleviation through agricultural diversification (FAO 2012; Michler and Josephson 2017). Hence, Johnston et al. (1995) and Mussema et al. (2015) defined agricultural diversification as the practice of farmers growing more than one crop on a given piece of land in any year to reduce vulnerability, marketing risks, and income and biological instability. Diversification is common in every society (Barrett et al. 2001); however, its extent and effect vary from region to region and from household to household within the same area (Escobal 2001). The increasing risks of crop failure due to erratic rainfall and crop disease continue to force farmers to diversify their enterprises as a hedge against these risks (Acharya et al. 2012). Greater agricultural diversification has also been found to decrease the likelihood of poor households remaining below the poverty line and of non-poor households falling into poverty (Michler and Josephson 2017).
Therefore, crop diversification plays a vital role in a farmer's decision-making process to minimize the risk of agricultural production (Davis and Schirmer 1987). Many studies have also indicated that agricultural diversification has multiple advantages for most smallholder farmers and for the functioning of ecosystems, by mitigating agricultural losses to pests and wildlife (Bommarco et al. 2013; Chaplin et al. 2011), improving soil fertility (Lin 2011; McDaniel et al. 2014; Tiemann et al. 2015) and biodiversity (Schulte et al. 2017; Tscharntke et al. 2005), and bringing about yield stability and nutrition diversity (Lin 2011). Acharya (2011) also asserted that agricultural diversification has a great role in striving for food security. Likewise, agricultural diversification plays a vital role in economic growth by enhancing productivity and household incomes, improving soil health (through crop rotation and nitrogen fixation), and supporting the sustainable intensification of agriculture (Mussema et al. 2015). Michler and Josephson (2017) revealed that crop diversification is the best strategy for households as a source of income, risk reduction, and poverty alleviation. Crop diversification can also increase absolute yields and yield stability for a number of crops and thus increase household income (Abson et al. 2013; Demissie and Legesse 2013; Makate et al. 2016; Njeru 2016). However, Burchfield and de la Poterie (2018) found that many farmers are not willing or able to diversify their enterprises because of the nature of their fields, elevation, soil quality, irrigation infrastructure, and relative position within an irrigation system. Likewise, the unsuitability of local environmental conditions is a main factor preventing farmers from crop diversification (McDaniel et al. 2014). Moreover, Ashfaq et al. (2008) revealed that crop diversification levels were influenced by various socio-economic and institutional factors such as land size, age, education, farming experience, and off-farm income of the farmer, the distance of the farm from the main road and from the main market, and farm machinery ownership. Nuru and Seebens (2008) also found that proximity to a town, road access, education, liquid wealth, and irrigation access were significant factors affecting crop choices in northern Ethiopia. Furthermore, Mussema et al. (2015) confirmed that crop diversification decisions of households were significantly influenced by land size, income from the sale of grain, walking time to the farm, distance from the district, access to all-weather roads, market information, extension service, the proportion of fertile plots, and the number of plots.

In Ethiopia, almost all smallholder farmers depend on raising and growing different types of enterprises on a given piece of farmland for their nutritional and livelihood improvement. Farmers are in general involved in a variety of enterprises, particularly diversifying crops and livestock. Unlike in commercial farming systems, farmers in subsistence farming use crop diversification as a method for reducing vulnerability, marketing risks, income instability, and food insecurity by cultivating a variety of crops on a given piece of land. Although agriculture is the main source of livelihood for most Ethiopian smallholder farmers, not much work has been done in northwestern Ethiopia on what factors determine smallholder farmers' decision and extent to diversify their enterprises to maximize their profit and minimize the risk of crop failures.
Hence, based on the above statement, the study was intended to empirically answer the following key question: what factors affect the decision and extent of crop diversification by smallholder farmers? The findings of this study can reduce the information gap on agricultural diversification and contribute to improving nutrition, income stability, food security, and poverty reduction for most smallholder farmers.

Description of study area

The study was conducted in the North Gondar Zone, Amhara region of Ethiopia. The zone is located in the northwest of Ethiopia, 738 km from the capital city of the country. The capital city of the zone is Gondar City, which is located at 12° 35′ 60.00′′ N latitude and 37° 28′ 0.01′′ E longitude with a mean altitude of 2133 masl. In the zone, the main sources of livelihood for households are crop production, vegetable production, animal production, beekeeping, and spice production, particularly pepper, ginger, and white and black cumin. The lowland of the zone is dominated by semi-arid natural forests. In the zone, 51% and 49% of the population are men and women, respectively (Dessie et al. 2019; Abate et al. 2019). The survey was conducted in the 2018 crop season in two large pepper-dominated districts, namely Takusa and Dembia (Fig. 1).

Map of study area

A combination of quantitative and qualitative data was collected from both primary and secondary sources. Primary data were collected from crop producers through semi-structured interviews and key informant interviews. Moreover, to enrich the investigation, secondary data were collected from records of administrative offices, published and unpublished reports, journals, books, websites, and other sources relevant to this study. The interview schedule, which consists of semi-structured questions, was prepared in English and translated into the local language to collect information on the socio-economic, demographic, and institutional characteristics of households. Furthermore, it was pre-tested, and the necessary amendments were made before the actual survey. The semi-structured interviews were administered to 385 sample producers and served as the main data-collection tool in this research.

Sampling design

In order to select the sampled respondents, a multi-stage sampling technique was employed. In the first stage, Takusa and Dembia Districts were selected purposively due to their high potential for diversified crop production for both consumption and commercial purposes. In the second stage, eight kebeles/villages with the largest crop diversification, namely Mekonta, Chemera, Banbaro, Deber-zuria, Guramba Michael, Arebia, Achera, and Gebaba-salge, were purposively selected in consultation with District Agriculture office experts due to their high potential for crop production and smallholder farming experience in crop diversification. In the third stage, 385 sampled crop producers were selected using a simple random sampling technique, with the sample size determined by the formula developed by Cochran (1977):

$$ n=\frac{Z^2 pq}{e^2}=\frac{1.96^2\left(0.5\ast 0.5\right)}{0.05^2}=385 $$

where n = sample size; Z = confidence level (Z = 1.96); p = 0.5, q = 1 − p; and e = 0.05 (error term). (This arithmetic is illustrated in the short sketch at the end of this subsection.)

In order to effectively handle and analyze the diverse data collected from the field and producers, a combination of descriptive statistics, inferential statistics, and econometric models was used.
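For transparency, the following short Python snippet reproduces Cochran's sample-size arithmetic used above; the inputs (Z = 1.96, p = 0.5, e = 0.05) are the values stated in the text, and the tighter margin-of-error variant is shown only for illustration.

    import math

    def cochran_n(z=1.96, p=0.5, e=0.05):
        """Cochran's (1977) sample size for estimating a proportion."""
        q = 1.0 - p
        return math.ceil(z**2 * p * q / e**2)

    print(cochran_n())        # 385, as used in this study
    print(cochran_n(e=0.03))  # a tighter margin of error needs a larger sample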
Chi-square tests were used to assess the association of household and farm-related attributes between groups (diversifiers vs. non-diversifiers). The t test was also used to assess mean differences between crop diversifiers and non-diversifiers for continuous explanatory variables. Furthermore, to investigate the determinants of producers' decision and extent of crop diversification, a Tobit model was used.

Empirical model specification

The extent of crop diversification can be determined by using several indices such as the Herfindahl index (HI), Simpson's index (SI), Margalef index (MI), Entropy index (EI), Modified entropy index (MEI), Ogive index (OI), Composite entropy index (CEI), and Berger-Parker index (BPI). These indices have been used by many researchers to estimate the extent of crop diversification practices of farmers (Sisay 2016; Mussema et al. 2015; Nuru and Seebens 2008; Mesfin, Fufa, and Haji 2011; Benin et al. 2004; Abay, Bjørnstad, and Smale 2009; Ashfaq et al. 2008; Bazaz and Haq 2013; Bittinger 2010; Greene 2012; Acharya et al. 2012; Goshu, Kassa, and Ketema 2012). However, this study used the HI because it is the most commonly used index in the crop diversification literature (Asante et al. 2018; Kanyua et al. 2013; Sichoongwe et al. 2014; De and Chattopadhyay 2010; Bittinger 2010; Malik and Singh 2002; Sisay 2016). Moreover, Theil (1967) was the first to use the HI to determine the extent of crop diversification. Likewise, the crop diversification index (CDI) was computed from the Herfindahl index to measure the extent of crop diversification for all diversified farmers using the method developed by Hirschman (1964). Hence, the extent of crop diversification was measured by the CDI. The CDI was obtained by subtracting the HI from 1 (Eq. 4) and therefore lies between 0 and 1. A CDI value of 0 indicates perfect specialization, and a movement towards 1 shows an increase in the extent of crop diversification (Malik and Singh 2002). Generally, the value of the CDI increases with diversification and equals 0 when farmers cultivate only one crop. In this study, the producers basically produce diversified crops such as red pepper, teff, cumin, barley, wheat, sorghum, chickpea, garlic, finger millet, and maize at a time. To compute the Herfindahl index, the authors used the total cropped land (ha) of diversifiers and the proportion of land allocated to each crop (ha) in the 2018 harvest season. The HI (the sum of squares of all n proportions) and the CDI (1 − HI) were computed using the formulas of Hirschman (1964) in Eqs. 3 and 4:

$$ {P}_{\mathrm{i}}=\frac{A_{\mathrm{i}}}{\sum \limits_{i=1}^n{A}_{\mathrm{i}}} $$

where P_i = proportion of the ith crop, A_i = area under the ith crop (ha), \( \sum_{i=1}^n A_i \) = total cropped land (ha), and i = 1, 2, 3, …, n (number of crops).

$$ \mathrm{Herfindahl}\ \mathrm{index}=\mathrm{HI}=\sum \limits_{i=1}^n{P_{\mathrm{i}}}^2 $$

$$ \mathrm{Crop}\ \mathrm{diversification}\ \mathrm{index}=\mathrm{CDI}=1-\mathrm{HI} $$

Crop diversification studies help to determine both the factors influencing the household's decision to diversify and the extent of diversification.
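To make the index calculation concrete, the short Python sketch below computes the Herfindahl index and the CDI from the land areas a household allocates to each crop. The crop areas used here are made-up numbers for illustration only, not values from the survey data.

    def crop_diversification_index(areas_ha):
        """Herfindahl-based CDI: 0 = complete specialization, values near 1 = high diversification."""
        total = sum(areas_ha.values())
        shares = [a / total for a in areas_ha.values()]       # P_i (Eq. 2)
        hi = sum(s ** 2 for s in shares)                      # HI  (Eq. 3)
        return 1.0 - hi                                       # CDI (Eq. 4)

    # Hypothetical plot areas (ha) for one household
    plots = {"red pepper": 0.40, "teff": 0.35, "maize": 0.25, "chickpea": 0.20}
    print(round(crop_diversification_index(plots), 3))        # 0.733

    # A household growing a single crop gets a CDI of 0
    print(crop_diversification_index({"red pepper": 1.2}))    # 0.0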
Rajasekharan and Veeraputhran (2002) revealed that, unlike the probit and logit models, a Tobit regression model is appropriate for estimating the decision and the extent/density of tree growing simultaneously. Ideally, the OLS model would be applicable if all households participated in all types of crop diversification, but in reality, even in this study, not all households participated in crop diversification. Hence, using OLS regression was assumed to create a sample selectivity bias because the model excludes the non-participants from the analysis. Therefore, the use of the Tobit model is appropriate because the parameter estimates would be biased and inconsistent if OLS were used. However, to address this potential bias, the study initially used the Heckman two-stage model developed by Heckman (1979); selection bias was tested by including the inverse Mills ratio (IMR), which was not significant. Moreover, the Tobit model is the most common censored regression model, and it expresses the observed outcome in terms of an underlying latent variable. The degree of bias from OLS also increases as the number of observations that take on the value of 0 increases. The coefficient estimates from a Tobit model do not directly give the marginal effects of the explanatory variables on the dependent variable; their signs only show the direction of the associations (Gujarati 2012). Prior to running the Tobit model, the data were checked for multicollinearity using variance inflation factors and contingency coefficients, and no problem was identified. The parameters of the Tobit model were estimated by the maximum likelihood method in Stata version 14. The Tobit model can be specified following Tobin (1958), Long (1997), Cameron and Trivedi (2010), and Greene (2012):

$$ {y}_i^{\ast}=\beta {X}_i+{\varepsilon}_i $$

$$ {y}_i=\begin{cases}{y}_i^{\ast}=\beta {X}_i+{\varepsilon}_i, & \text{if } {y}_i^{\ast}>0\ \text{(diversifiers)}\\ 0, & \text{if } {y}_i^{\ast}\le 0\ \text{(non-diversifiers)}\end{cases},\qquad i=1,2,3,\dots, n $$

Socio-economic characteristics of households

The total sample size of respondents handled during the survey was 385. Crop diversification levels of farmers depend on various demographic, institutional, and socio-economic factors. This study showed that more than three-fourths (80.52%) of producers were crop diversifiers, and the remaining households (19.48%) were non-diversifiers. The mean age of diversifier producers (48.98 years) was higher than that of non-diversifiers (45.56 years) (Table 1). The t test indicates that the age difference was statistically significant at the 5% significance level. This implies that, compared with younger households, older households grow a variety of crops on a given piece of land due to their greater farming experience. This finding is in line with previous studies (Asante et al. 2018; Lemi 2009; Enete and Igbokwe 2009).

Table 1 Mean and proportion of producers' characteristics by crop diversification

In Ethiopia, the mean landholding size at the country level is 1.02 ha per household (Teshome 2014). However, mean landholding size varies across different parts of Ethiopia; for instance, the mean landholding size in the study area is 1.76 ha per household.
In the study area, the mean farmland of diversifier households (1.86 ha per household) was higher than that of non-diversifiers (1.33 ha per household) (Table 1). The t test indicates that the difference in total farmland size was statistically significant at the 1% significance level. This means that households' total farmland contributed directly to growing a variety of crops and improving livelihoods through income and crop diversification. This result is consistent with Asante et al. (2018), Mussema et al. (2015), Huang et al. (2014), Kanyua et al. (2013), Sichoongwe et al. (2014), and Mekuria and Mekonnen (2018), who reported that the practice of crop diversification increased with total farmland. Moreover, 77.92% and 17.40% of male-headed producers were diversifiers and non-diversifiers, respectively. The chi-square test shows that sex was statistically significant at the 1% significance level. This implies that, compared with female-headed households, male-headed households are more able to grow multiple crops on a given piece of land due to unequal ownership of and access to factors of production. This finding is in line with the results of previous studies (Demissie and Legesse 2013; Lemi 2009). Furthermore, 53.51% of diversifiers and 10.13% of non-diversifiers had access to market information. On the other hand, 27.01% of diversifiers and 9.35% of non-diversifiers did not have access to market information. The chi-square test indicates that access to market information was statistically significant at the 5% significance level. This implies that market information can decrease the uncertainty of producers associated with crop diversification. These findings are consistent with previous studies (Mesfin, Fufa, and Haji 2011; Mussema et al. 2015).

Household's land allocation for growing diversified crops

The survey result presented in Fig. 2 revealed that producers allocated their total land to various crops: red pepper (19.32%), teff (17.61%), maize (12.5%), chickpea (11.36%), sorghum (9.10%), barley (6.82%), garlic (6.25%), cumin (5.68%), and other crops (11.36%). This implies that most producers practiced and used crop diversification as a strategy for risk minimization and income diversification, which in turn reduced the food insecurity and poverty status of most rural households. This finding is in line with previous studies (Lemi 2009; Sisay 2016; Mekuria and Mekonnen 2018). Furthermore, households with large landholdings that are willing and able to grow a diverse set of crops reduce their probability of falling into poverty (Michler and Josephson 2017).

Crop distribution in the study area

The distribution of crop diversification index

The result presented in Fig. 3 indicated that the mean crop diversification index was 0.769 with a standard deviation of 0.142. This implies high levels of crop diversification among farmers. The mean index in this study is comparable with the findings of Mekuria and Mekonnen (2018) and Asante et al. (2018), who found 0.57 and 0.59 in the highlands of Ethiopia and in Ghana, respectively. Figure 3 also depicts that the crop diversification index was approximately normally distributed and moderately skewed to the right, suggesting that most households were crop diversifiers. It also revealed that 92.46% of the households had indices of 0.5 and above, suggesting a high level of crop diversification among farmers.
This study is in line with Mekuria and Mekonnen (2018), who confirmed that more than three-fourths (79%) of households had an index of 0.5 and above. Moreover, the estimated crop diversification index was statistically significant among diversifiers at the 1% significance level. Michler and Josephson (2017) also confirmed that a 10% increase in the crop diversification index decreases the probability of being poor by 17.5%.

Mean of household's crop diversification index

Determinants of crop diversification: estimated through Tobit model

The results of the Tobit model are presented in Table 2. The multicollinearity test revealed that there was no multicollinearity among the explanatory variables. The chi-square statistic of the Tobit regression model indicates the overall goodness of fit of the model and was statistically significant at the 1% probability level. The model results also identify the explanatory variables that significantly affected households' decision and extent of crop diversification. The model output suggested that variables such as the size of farmland, sex, age, land fragmentation, distance to the development center, market distance, and non-/off-farm income participation were the major factors that significantly affected the decision and extent of crop diversification simultaneously.

Table 2 Tobit regression result on status and intensity of crop diversification

The findings of this study revealed that the size of farm landholding affected households' crop diversification decision and extent positively and significantly at the 1% level of significance. As the size of farm landholding increases by one hectare, the probability that a farmer participates in crop diversification and the number of crops a farmer grows increase by 11% and 3%, respectively. This implies that a large farm landholding may allow households to allot their land to a greater variety of crops than smaller landholders can. This finding is in line with previous studies, which revealed that land size positively and significantly affected crop diversification (Benin et al. 2004; Ashfaq et al. 2008; Abay, Bjørnstad, and Smale 2009; Bonham et al. 2012). This result is also consistent with recent findings (Makate et al. 2016; Mussema et al. 2015; Kanyua et al. 2013; Sichoongwe 2014; Huang et al. 2014) reporting that an increase in the availability of farmland leads farmers to practice crop diversification.

Sex of the household head positively and significantly affected crop diversification decisions and extent at the 10% and 1% levels of significance, respectively. Compared with female-headed households, for male-headed households the probability of participating in crop diversification activities and the number of crops grown increase by 15% and 3%, respectively. This implies that male-headed households are risk-takers, hold more resources, and are more likely than female-headed households to grow multiple crops to improve their family livelihoods. This finding is consistent with Abay, Bjørnstad, and Smale (2009), who reported that male headship, unlike female headship, positively and significantly affected barley variety diversification in Ethiopia. Asante et al. (2018) also confirmed that male-headed households were more inclined to increase the extent of diversification than female farmers. Likewise, Demissie and Legesse (2013) stated that, due to cultural, religious, and financial constraints, female-headed households had fewer roles in income diversification.
Moreover, Lemi (2009) revealed that a large number of dependents and female headship characterized poor farm households in rural Ethiopia. Furthermore, Shezongo-Macmillan (2005) revealed that, compared with male-headed households, female-headed households in Zambia were less likely to diversify crops due to unequal ownership of and access to resources. However, some studies (Rehima and Dawit 2012; Rehima et al. 2013; Sisay 2016) indicated that female-headed households positively affected the probability of crop diversification.

Walking distance from the residence to the development center is a proxy variable for extension service and negatively and significantly affected farmers' crop diversification decision and intensity at the 10% level of significance. The results showed that as the walking distance to the development center increases by 1 km, the likelihood that a farmer participates in crop diversification and the number of crops a farmer grows decrease by 0.5% and 0.2%, respectively. This implies that farmers who are far away from development centers incur high transportation costs and have poor access to extension advice and input supplies such as improved seeds, fertilizers, and farming tools. As a result, the extent of crop diversification and production for commercial purposes declines. This result is consistent with the previous finding of Sisay (2016), who reported that the walking distance from the residence to the development center negatively and significantly affected the decision and extent of crop diversification.

Distance from the market has a positive and significant effect on crop diversification at the 1% significance level. The results of this study showed that an increase of 1 km in walking distance to the market increases the likelihood of households participating in crop diversification by 1% and the number of crops a farmer grows by 0.2%. This implies that farmers who are far from the market incur higher transaction costs for obtaining information, technology, and industrial consumable goods and services. As a result, these households' decision and intensity of crop diversification increase to meet and improve their family consumption and nutritional needs. The result also implies that farmers who are close to an urban market tend to allocate more farmland to the production of cash and commercial crops, while those who are far away from a market tend to allot much of their farmland to the production of staple (non-cash) crops for consumption and subsistence purposes. This finding is in line with recent studies (Mussema et al. 2015; Sichoongwe et al. 2014; Ibrahim et al. 2009; Nuru and Seebens 2008), which revealed that market distance had a positive and significant effect on households' crop diversification decision and extent because remote households produced mainly staple crops for family consumption. Likewise, Kankwamba, Mapila, and Pauw (2012) stated that farmers located far away from markets are found to diversify crops to meet their wide subsistence and nutritional needs. However, some studies indicated that market distance negatively and significantly influenced crop diversification (Sisay 2016).

The age of the household head was positively and significantly associated with the decision and intensity of crop diversification at the 1% significance level. As an additional year is added to the age of the household head, the likelihood of participating in crop diversification activities and the number of crops a farmer grows increase by 0.2% and 0.08%, respectively.
This implies that older farmers can reduce production adversity by growing multiple crops on a given piece of land, and that their decision and intensity of crop diversification were also determined by their past production experience. This result is consistent with the findings of Asante et al. (2018) and Lemi (2009), who revealed that the age of the household head positively and significantly affected crop diversification. Likewise, Enete and Igbokwe (2009) also confirmed that older households were more likely to produce and sell various crops.

Land fragmentation negatively and significantly affected households' decision and intensity of crop diversification at the 1% significance level. The addition of one plot decreased the likelihood of households participating in crop diversification by 6% and the number of crops a farmer grows by 2%. This implies that farmers who operate on a larger number of farm plots maintained lower crop diversity, perhaps due to similar soil and agro-ecological conditions among plots, which lead to growing similar, high-value crops either for consumption or commercial purposes across different plots. The findings of this study are in line with a recent study by Sisay (2016), who stated that the number of fragmented plots had a negative and significant effect on crop diversification. However, inconsistent with this finding, some previous studies showed that the number of plots and the fragmentation index positively affected agricultural diversification (Abay, Bjørnstad, and Smale 2009; Mussema et al. 2015; Mesfin, Fufa, and Haji 2011; Rehima et al. 2013; Nagarajan, Smale, and Glewwe 2007).

The coefficient for household participation in off-/non-farm income activities negatively and significantly affected the probability and intensity of crop diversification at the 1% level of significance. For households that participated in off-/non-farm activities, the likelihood of participating in crop diversification and the number of crops grown decrease by 7% and 2%, respectively. The result suggests that participation in off-/non-farm income activities provides an alternative source of income to households and their livelihoods and, as a result, reduces the practice of crop diversification. However, Asante et al. (2018) revealed that off-farm income had a positive and significant effect on crop–livestock diversification.

Conclusion and recommendations

The diversified farming system remains a source of income, a risk reduction strategy, and a means to improve the livelihoods of households in northwestern Ethiopia. The crop diversification strategy also plays a significant role for households, particularly as a source of income and a means of nutritional improvement. It is also used as a risk reduction mechanism to obtain food and income from multiple crop sources. However, various socio-economic, demographic, and institutional factors influenced households' decision and intensity of crop diversification. Our results revealed that the majority of smallholder farmers (92%) have a crop diversification index above 0.5, and the average crop diversification index was 0.769, implying high levels of crop diversification. The results of inferential statistics such as the chi-square and t tests revealed that various socio-economic parameters such as age, sex, market information, and farmland size had significant associations and mean differences between groups.
Moreover, the result of the Tobit regression model indicated that various policy-relevant variables such as age, sex, land size, distance to the development center, market distance, and land fragmentation had a significant influence on the status and extent of crop diversification simultaneously. For instance, an older male-headed household has more resources and farming experience in crop diversification. Likewise, smallholder farmers' status and intensity of crop diversification were found to increase with more farmland owned and with greater distance from the market. However, they declined with greater land fragmentation and with participation in off-/non-farm activities, due to the similar agro-ecological conditions across plots and the high contribution of off-/non-farm activities to households' income sources.

Given the potential and significant role of crop diversification in improving the livelihoods of most smallholder farmers, the following implications are offered for the development of the crop diversification strategy. In most developing countries, smallholder farmers provide and supply food for most of a country's citizens. Therefore, the government should consider and undertake policies that will improve smallholder farmers' access to and control over land, because it allows farmers to grow multiple crops for the purpose of enhancing food and nutrition security and reducing poverty. Farmers who are far away from markets diversify crops mainly for nutritional and consumption purposes; if markets are brought closer to farmers, they will diversify crops for commercial purposes. This implies that the ability of households to adopt new agricultural technologies such as improved crop varieties and inorganic fertilizers is related to market access. Hence, the government should facilitate and improve markets and road infrastructure by bringing them closer to farmers. Moreover, in most developing countries, including Ethiopia, resources are mainly controlled by male-headed farm households. Therefore, there should be policies that enhance the equitable distribution of resources and the involvement of female-headed farm households in the crop diversification strategy.

Generally, this study provides information on why, how, and what smallholder farmers diversify on agricultural fields. Likewise, the information generated could help a number of organizations, including research and development organizations, academicians, traders, producers, policy-makers, extension service providers, government, and non-governmental organizations, to assess their activities, redesign their modes of operation, and ultimately influence the design and implementation of policies and strategies in the agricultural sector. Further research should be conducted on the impacts of livelihood diversification strategies on households' food security status. The authors declare that they can submit the data at any time upon the publisher's request.
The datasets used and/or analyzed during the current study will be available from the author on reasonable request.

AE: Adult equivalent
BPI: Berger-Parker index
CDI: Crop diversification index
CEI: Composite entropy index
CSA: Central Statistical Agency
EI: Entropy index
FAO: Food and Agriculture Organization
HI: Herfindahl index
IMR: Inverse Mills Ratio
MEI: Modified entropy index
MI: Margalef index
n: Sample size
OI: Ogive index
OLS: Ordinary least squares
p: Probability of success
q: Probability of failure
SI: Simpson's index
TLU: Tropical Livestock Unit
Z: Confidence level

Abate TM, Dessie AB, Mekie TM (2019) Technical efficiency of smallholder farmers in red pepper production in North Gondar zone Amhara regional state, Ethiopia. J Econ Struct 8(1):18 Abay F, Bjørnstad A, Smale M (2009) Measuring on farm diversity and determinants of barley diversity in Tigray, northern Ethiopia. Momona Ethiopian J Sci 1(2):44–66 Abson DJ, Fraser EDG, Benton TG (2013) Landscape diversity and the resilience of agricultural returns: a portfolio analysis of land-use patterns and economic returns from lowland agriculture. Agric Food Secur 2(1):2 Acharya SP (2011) Crop Diversification in Karnataka: An Economic Analysis. UAS, Dharwad Acharya SP, Basavaraja H, Kunnal LB, Mahajanashetti SB, Bhat ARS (2012) Growth in area, production and productivity of major crops in Karnataka. Karnataka J Agric Sci 25(4):431–436 Asante BO, Villano RA, Patrick IW, Battese GE (2018) Determinants of farm diversification in integrated crop–livestock farming systems in Ghana. Renewable Agric Food Syst 33(2):131–149 Ashfaq M, Hassan S, Naseer MZ, Baig IA, Asma J (2008) Factors affecting farm diversification in rice–wheat. Pak J Agric Sci 45(3):91–94 Barrett CB, Reardon T, Webb P (2001) Nonfarm income diversification and household livelihood strategies in rural Africa: concepts, dynamics, and policy implications. Food Policy 26(4):315–331 Bazaz NH, Haq I (2013) Crop diversification in Jammu and Kashmir: pace, pattern and determinants. IOSR J Humanities Social Sci 11(5):1–7 Benin S, Smale M, Pender J, Gebremedhin B, Ehui S (2004) The economic determinants of cereal crop diversity on farms in the Ethiopian highlands. Agric Econ 31(2-3):197–208 Bittinger AK (2010) Crop diversification and technology adoption: the role of market isolation in Ethiopia. Montana State University-Bozeman, College of Agriculture Bommarco R, Kleijn D, Potts SG (2013) Ecological intensification: harnessing ecosystem services for food security. Trends Ecol Evol 28:230–238 Bonham CA, Gotor E, Beniwal BR, Canto GB, Ehsan MD, Mathur P (2012) The patterns of use and determinants of crop diversity by pearl millet ( Pennisetum glaucum (L.) R. Br.) farmers in Rajasthan. Ind J Plant Genet Resour 25(1):85–96 Burchfield EK, de la Poterie AT (2018) Determinants of crop diversification in rice-dominated Sri Lankan agricultural systems. J Rural Stud 61:206–215 Cameron AC, Trivedi PK (2010) Microeconometrics Using Stata, Revised Edition. Stata Press, College Station Chaplin KR, O'Rourke ME, Blitzer EJ, Kremen C (2011) A meta-analysis of crop pest and natural enemy response to landscape complexity. Ecol Lett 14:922–932 Cochran WG (1977) Sampling Techniques, 3rd edn. Wiley, New York Davis TJ, Schirmer IA (1987) Sustainability issues in agricultural development: proceedings of the seventh agriculture sector symposium. Agricultural Sector Symposium, 7 De UK, Chattopadhyay M (2010) Crop diversification by poor peasants and role of infrastructure: Evidence from West Bengal.
J Dev Agric Econ 2(10):340–350 Demissie A, Legesse B (2013) Determinants of income diversification among rural households: The case of smallholder farmers in Fedis district, Eastern Hararghe zone, Ethiopia. J Dev Agric Econ 5(3):120–128 Dessie AB, Koye TD, Koye AD, Abitew AA (2019) Analysis of red pepper marketing: evidence from northwest Ethiopia. J Econ Struct 8:24 https://doi.org/10.1186/s40008-019-0156-0 Enete AA, Igbokwe EM (2009) Cassava market participation decisions of producing households in Africa. Tropicultura 27(3):129–136 Escobal J (2001) The determinants of nonfarm income diversification in rural Peru. World Dev 29(3):497–508 FAO (2012) Crop diversification for sustainable diets and nutrition: The role of FAO's Plant Production and Protection Division. In: Technical report, Plant Production and Protection Division. United Nations, Food and Agriculture Organization, Rome Goshu D, Kassa B, Ketema M (2012) Does crop diversification enhance household food security? Evidence from rural Ethiopia. Advan Agric Sci Engineering Res 2(11):503–515 Greene WH (2012) Econometric Analysis, 7th edn. Prentice Hall, New York Gujarati DN (2012) Basic Econometrics. Tata McGraw Hill Education Private Limited, New York Heckman JJ (1979) Sample selection bias as a specification error. Econometrica 47(1):153–161 Hirschman AO (1964) The paternity of an index. Am Econ Rev 54(5):761–762 Huang J-k, Jiang J, Wang J-x, L-l H (2014) Crop diversification in coping with extreme weather events in China. J Integr Agric 13(4):677–686 Ibrahim H, Rahman SA, Envulus EE, Oyewole SO (2009) Income and crop diversification among farming households in a rural area of north central Nigeria. J Tropical Agric Food Environ Exten 8(2):84–89 Johnston GW, Suzanne V, Franz RK, Melissa C (1995) Crop and farm diversification provide social benefits. Calif Agric 49(1):10–16 Kankwamba H, Mapila MATJ, Pauw K (2012) Determinants and spatiotemporal dimensions of crop diversification in Malawi. Project Report produced under a cofinanced research agreement between Irish Aid, USAID and IFPRI, Paper 3 Kanyua MJ, Ithinji GK, Muluvi AS, Gido OE, Waluse SK (2013) Factors influencing diversification and intensification of horticultural production by smallholder tea farmers in Gatanga District, Kenya. Curr Res J Soc Sci 5(4):103–111 Lemi A (2009) Determinants of income diversification in rural Ethiopia: Evidence from panel data. Ethiopian J Econ 18(1) Lin BB (2011) Resilience in agriculture through crop diversification: adaptive management for environmental change. BioScience 61:183–193 Long JS (1997) Advanced quantitative techniques in the social sciences series, Vol. 7. Regression models for categorical and limited dependent variables. Sage Publications, Inc, Thousand Oaks Makate C, Wang R, Makate M, Mango N (2016) Crop diversification and livelihoods of smallholder farmers in Zimbabwe: adaptive management for environmental change. SpringerPlus 5(1):1135 Malik DP, Singh IJ (2002) Crop diversification-An economic analysis. Indian J Agric Res 36(1):61–64 McDaniel MD, Tiemann LK, Grandy AS (2014) Does agricultural crop diversity enhance soil microbial biomass and organic matter dynamics? A meta-analysis. Ecol Appl 24:560–570 Mekuria W, Mekonnen K (2018) Determinants of crop–livestock diversification in the mixed farming systems: evidence from central highlands of Ethiopia. Agric Food Secur 7(1):60 Mesfin W, Fufa B, Haji J (2011) Pattern, trend and determinants of crop diversification: empirical evidence from smallholders in eastern Ethiopia. 
J Econ Sustainable Dev 2(8):78–89 Michler JD, Josephson AL (2017) To specialize or diversify: agricultural diversity and poverty dynamics in Ethiopia. World Dev 89:214–226 Mussema R, Kassa B, Alemu D, Shahidur R (2015) Determinants of crop diversification in Ethiopia: Evidence from Oromia region. Ethiopian J Agric Sci 25(2):65–76 Nagarajan L, Smale M, Glewwe P (2007) Determinants of millet diversity at the household-farm and village-community levels in the drylands of India: the role of local seed systems. Agric Econ 36(2):157–167 Njeru EM (2016) Crop diversification: a potential strategy to mitigate food insecurity by smallholders in sub-Saharan Africa. J Agric Food Syst Comm Develop 3(4):63–69 Nuru SA, Seebens H (2008) The impact of location on crop choice and rural livelihood: evidences from villages in Northern Ethiopia. Center for Development Research (ZEF), University of Bonn, Germany Rajasekharan P, Veeraputhran S (2002) Adoption of intercropping in rubber smallholdings in Kerala, India: a tobit analysis. Agrofor Syst 56(1):1–11 Rehima M, Belay K, Dawit A, Rashid S (2013) Factors affecting farmers' crops diversification: Evidence from SNNPR, Ethiopia. Int J Agric Sci 3(6):558–565 Rehima M, Dawit A (2012) Red pepper marketing in Siltie and Alaba in SNNPRS of Ethiopia: factors affecting households' marketed pepper. Int Res J Agric Sci Soil Sci 2(6):261–266 Schulte LA, Niemi J, Helmers MJ, Liebman M, Arbuckle JG, James DE, Kolka RK, O'Neal ME, Tomer MD, Tyndall JC, Asbjornsen H, Drobney P, Neal J, Van Ryswyk G, Witte C (2017) Prairkie strips improve biodiversity and the delivery of multiple ecosystem services from corn–soybean croplands. Proc Natl Acad Sci U S A 114:11247–11252 Shezongo-Macmillan J (2005) Women's property rights in Zambia. Paper read at Strategic Litigation Workshop. Johannesburg, South Africa Sichoongwe K (2014) Determinants and Extent of Crop Diversification Among Smallholder Farmers in Southern Zambia. Collaborative Masters Program in Agricultural and Applied Economics Sichoongwe K, Mapemba L, Ng'ong'ola D, Tembo G (2014) The determinants and extent of crop diversification among smallholder farmers: A case study of Southern Province, Zambia. J Agric Sci 6:150–159 Sisay D (2016) Agricultural Technology Adoption, Crop Diversification and Efficiency of Maize-Dominated Smallholder Farming System in Jimma Zone, Southwestern Ethiopia. Haramaya University, Ethiopia Teshome M (2014) Population growth and cultivated land in rural Ethiopia: land use dynamics, access, farm size, and fragmentation. Resour Environ 4(3):148–161 Theil H (1967) Economics and information theory. North-Holland Publishing Company, Amsterdam Tiemann LK, Grandy AS, Atkinson EE, Marin‐Spiotta E, McDaniel MD, Hooper D (2015) Crop rotational diversity enhances belowground communities and functions in an agroecosystem. Ecol Lett 18:761–771 Tobin J (1958) Estimation of relationships for limited dependent variables. Econometrica 26:24–36 Tscharntke T, Klein AM, Kruess A, Steffan-Dewenter I, Thies C (2005) Landscape perspectives on agricultural intensification and biodiversity - ecosystem service management. Ecol Lett 8:857–874 The author would like to thank the University of Gondar since financial support for this research was obtained from the University of Gondar. Moreover, we thank the data respondents, enumerators and district experts for their valuable response during the data collection process. 
Author affiliations: Department of Agricultural Economics, College of Agriculture and Environmental Science, University of Gondar, P.O. Box 196, Gondar, Ethiopia (Abebe Birara Dessie, Tadie Mirie Abate, and Taye Melese Mekie); Department of Plant Science, College of Agriculture and Environmental Science, University of Gondar, Gondar, Ethiopia (Yigrem Mengist Liyew). All authors read and approved the final manuscript. Correspondence to Abebe Birara Dessie.

Ethical clearance letters were obtained from the University of Gondar research and community service directorate and the North Gondar Zone administrative office to protect both the study participants and the researchers. During the survey, official letters were written for each district and kebele (village), informed verbal consent was obtained from each respondent, and confidentiality was maintained by assigning codes to respondents rather than recording their names. Study participants were informed of their full right to discontinue or refuse participation. All participants throughout the research, including survey households, enumerators, supervisors, and key informants, were fully informed of the objectives of the study and were approached in a friendly manner until the end of the research.

Dessie, A.B., Abate, T.M., Mekie, T.M. et al. Crop diversification analysis on red pepper dominated smallholder farming system: evidence from northwest Ethiopia. Ecol Process 8, 50 (2019). https://doi.org/10.1186/s13717-019-0203-7
Martin Graves, Cambridge University Hospitals NHS Foundation Trust, United Kingdom. The standard method of quantifying cardiovascular velocity and flow in MRI is to use a phase contrast (PC) imaging technique. This presentation will describe the basic principles of the PC method, its practical implementation and clinical optimisation. Target audience: MR scientists/engineers and clinicians interested in the quantification of blood velocity and flow. Outcome/Objectives: Understand the principles of phase contrast techniques and how to apply the methods in research and clinical practice. The effects of motion on the NMR signal were well known before the development of MRI. Both magnitude effects, due to spin washout, and phase effects, due to motion along a magnetic field gradient, were described in the 1950s/60s. A review of the early work around velocity and flow effects in MRI was published by Axel in 1984 (1). Moran in 1982 (2) introduced the concept of encoding the velocity of a spin into the complex NMR signal, a technique that he called a 'flow-velocity zeugmatographic interlace'. Several groups then developed this technique into a method to directly encode velocity into the phase of the signal using balanced gradient pulses (3-5). It was, however, noted that background field non-uniformity was a confounder to the accuracy of the technique, particularly for gradient echo based sequences. To eliminate these background phase shifts, two acquisitions are performed along each gradient direction with the bipolar velocity-encoding gradients modified for the second acquisition. The phase images for both acquisitions are then calculated and subtracted, cancelling the stationary background phase and leaving only positive and negative phase shifts depending upon the direction of blood flow (6). Spins moving with a velocity $$$v$$$ along the direction of a magnetic field gradient of amplitude $$$G$$$ and duration $$$T$$$ accumulate phase according to $$$\phi=\gamma\cdot v\cdot G\cdot T^2$$$. The product $$$G\cdot T^2$$$ is usually referred to as the first moment of the gradient $$$(M_1)$$$. If we perform two acquisitions with different first moments, then the phase difference is given by $$$\triangle\phi=\gamma\cdot v\cdot \triangle M_1$$$. The phase/velocity relationship is scaled through a user-controlled velocity encoding value, known as the venc. Since we have $$$2\pi$$$ of unique phase available, flow in one direction, relative to the gradient direction, is allocated $$$0$$$ to $$$+\pi$$$, whilst flow in the opposite direction is allocated $$$0$$$ to $$$-\pi$$$. The venc is the maximum velocity, along each direction, that will result in a $$$\pi$$$ phase shift, i.e. $$$venc=\frac{\pi}{\gamma\cdot\triangle M_1}$$$. To minimise the echo time (TE), the velocity encoding gradients are usually combined with the imaging gradients. In an 'asymmetric' acquisition, one acquisition has the gradients calculated to yield a zero phase shift for spins moving with a constant velocity, i.e., $$$M_1=0$$$. In the second acquisition the gradient amplitudes are modified to yield the desired phase/velocity sensitivity. In a 'symmetric' acquisition the phase/velocity sensitivity is shared equally between the two acquisitions $$$(\pm \frac{\triangle M_1}{2})$$$ (7). Figure 1(a) shows two adjacent TR periods for a 'symmetric' acquisition. In quantitative velocity/flow imaging we usually perform a single-slice PC acquisition perpendicular to the direction of the vessel, i.e. through-plane encoding.
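As a quick numerical illustration of the venc relationship above, the sketch below converts between a desired venc and the required first-moment difference; the gyromagnetic ratio is that of protons, and the example value is illustrative only, not taken from the presentation:

```python
import numpy as np

GAMMA = 2 * np.pi * 42.58e6  # 1H gyromagnetic ratio in rad s^-1 T^-1

def venc_from_moment(delta_m1):
    """venc (m/s) produced by a first-moment difference delta_m1 (T s^2 m^-1)."""
    return np.pi / (GAMMA * delta_m1)

def moment_for_venc(venc):
    """First-moment difference (T s^2 m^-1) needed for a desired venc (m/s)."""
    return np.pi / (GAMMA * venc)

# e.g. a venc of 1.5 m/s requires a first-moment difference of ~7.8e-9 T s^2 / m
print(moment_for_venc(1.5))
```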
Usually a multiple temporal-phase acquisition is performed that is synchronised to the subject's cardiac cycle. Such an acquisition is often termed cine phase contrast (CPC) imaging. The velocity-encoding gradients are usually applied along the slice selection direction to quantify velocities through the slice, and the two encodings are usually interleaved within the same heartbeat to minimize spatial misregistration. Fast imaging methods such as segmented k-space acquisitions (8) are often used with CPC imaging, as they are in standard functional cardiac imaging, to reduce overall acquisition times. Figure 1(b) shows a segmented PC acquisition. However, care needs to be taken in terms of the effective temporal resolution of the acquisition, since increasing the segment duration results in an increased smoothing or low-pass filtering of the time-resolved flow waveform (9). Figure 2 shows images from a typical CPC acquisition. Analysis of the images generally involves outlining of the vessel on the magnitude images and applying the outline to the phase image. The average phase value within the region can then be converted into an average velocity (m s-1). Furthermore, since we know the area (m2) of the region for each temporal phase, we can multiply that by the mean velocity to obtain the instantaneous flow volume (m3 s-1). In addition to average velocities it is possible to determine the peak velocity within the region, as this may also be diagnostically useful. If the actual velocity exceeds the user-selected venc then aliasing will occur. It is possible to 'unwrap' this aliasing; otherwise it will be necessary to repeat the acquisition using a higher venc. Note that velocity images will appear noisier as the venc is increased. Providing that the velocity $$$v < venc$$$, the signal-to-noise ratio (SNR) of the phase difference image can be expressed as $$$SNR_{\triangle\phi}\propto SNR_M\cdot\frac{v}{venc}$$$, where $$$SNR_M$$$ is the SNR of the magnitude image (10). Even though the phase images from positive and negative flow encodings are subtracted to eliminate background phase errors, there may be residual errors due to the different eddy currents produced by the different flow encoding gradient amplitudes. These errors appear as offsets in the data, so typically the stationary background is no longer zero. Some correction of the background signal offset may therefore be required, otherwise regurgitant or shunt flow measurement accuracy may be compromised (11). Other residual phase errors, such as those due to concomitant fields/Maxwell terms (12) and gradient field non-linearities (13), can also be corrected. The accuracy of CPC measurements also depends on several imaging parameters: it is important to ensure there are at least 16 pixels covering the vessel area and to use high receiver bandwidths and an in-phase TE to minimise the effects of chemical shift-induced phase errors (10, 14). 4D methods: Velocity mapping can also be performed in-plane by applying the flow-encoding gradients on the appropriate axis. Furthermore, it is possible to acquire velocity data along all three directions, which, when combined as part of a multi-temporal-phase 3D acquisition, can provide 4D velocity quantification and visualisation. These data can be used to measure velocity/flow in any orientation and also to calculate temporally resolved 4D flow vectors and streamlines. These acquisitions are quite time consuming and require either respiratory gating, typically using navigators, or signal averaging (15).
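The analysis steps just described (scaling phase by the venc, ROI-averaged flow, stroke volume, and cardiac output) can be sketched as follows; this is only an illustrative outline, with units and array conventions assumed for the example rather than taken from any particular software:

```python
import numpy as np

def phase_to_velocity(phase_diff, venc):
    """Map a phase-difference image (radians, -pi..+pi) to velocity in the units of venc."""
    return venc * phase_diff / np.pi

def roi_flow(velocity_map, roi_mask, pixel_area_mm2):
    """Instantaneous flow = mean velocity within the ROI x ROI area (mm^3/s here)."""
    mean_velocity = velocity_map[roi_mask].mean()   # e.g. mm/s
    roi_area = roi_mask.sum() * pixel_area_mm2      # mm^2
    return mean_velocity * roi_area

def stroke_volume(flows, frame_duration_s):
    """Area under the flow-time curve across one cardiac cycle."""
    return np.sum(np.asarray(flows) * frame_duration_s)

# cardiac output = stroke volume x heart rate (e.g. ml/beat x beats/min = ml/min)
```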
1. Axel L. Blood flow effects in magnetic resonance imaging. AJR Am J Roentgenol. 1984; 143(6): 1157-66. 2. Moran PR. A flow velocity zeugmatographic interlace for NMR imaging in humans. Magn Reson Imaging. 1982; 1(4): 197-203. 3. Bryant DJ, Payne JA, Firmin DN, Longmore DB. Measurement of flow with NMR imaging using a gradient pulse and phase difference technique. J Comput Assist Tomogr. 1984; 8(4): 588-93. 4. van Dijk P. Direct cardiac NMR imaging of heart wall and blood flow velocity. J Comput Assist Tomogr. 1984; 8(3): 429-36. 5. Wedeen VJ, Rosen BR, Chesler D, Brady TJ. MR velocity imaging by phase display. J Comput Assist Tomogr. 1985; 9(3): 530-6. 6. Ridgway JP, Smith MA. A technique for velocity imaging using magnetic resonance imaging. Br J Radiol. 1986; 59(702): 603-7. 7. Bernstein MA, Shimakawa A, Pelc NJ. Minimizing TE in moment-nulled or flow-encoded two- and three-dimensional gradient-echo imaging. J Magn Reson Imaging. 1992; 2(5): 583-8. 8. Foo TK, Bernstein MA, Aisen AM, Hernandez RJ, Collick BD, Bernstein T. Improved ejection fraction and flow velocity estimates with use of view sharing and uniform repetition time excitation with fast cardiac techniques. Radiology. 1995; 195(2): 471-8. 9. Polzin JA, Frayne R, Grist TM, Mistretta CA. Frequency response of multi-phase segmented k-space phase-contrast. Magn Reson Med. 1996; 35(5): 755-62. 10. Nayak KS, Nielsen JF, Bernstein MA, Markl M, et al. Cardiovascular magnetic resonance phase contrast imaging. J Cardiovasc Magn Reson. 2015; 17: 71. 11. Gatehouse PD, Rolf MP, Graves MJ, Hofman MB, Totman J, Werner B, et al. Flow measurement by cardiovascular magnetic resonance: a multi-centre multi-vendor study of background phase offset errors that can compromise the accuracy of derived regurgitant or shunt flow measurements. J Cardiovasc Magn Reson. 2010; 12: 5. 12. Bernstein MA, Zhou XJ, Polzin JA, King KF, Ganin A, Pelc NJ, et al. Concomitant gradient terms in phase contrast MR: analysis and correction. Magn Reson Med. 1998; 39(2): 300-8. 13. Markl M, Bammer R, Alley MT, Elkins CJ, Draney MT, Barnett A, et al. Generalized reconstruction of phase contrast MRI: analysis and correction of the effect of gradient field distortions. Magn Reson Med. 2003; 50(4): 791-801. 14. Lotz J, Meier C, Leppert A, Galanski M. Cardiovascular flow measurement with phase-contrast MR imaging: basic facts and implementation. Radiographics. 2002; 22(3): 651-71. 15. Markl M, Chan FP, Alley MT, Wedding KL, Draney MT, Elkins CJ, et al. Time-resolved three-dimensional phase-contrast MRI. J Magn Reson Imaging. 2003; 17(4): 499-506. (a) shows two TRs of a PC sequence incorporating additional bipolar gradients (ringed in blue) to encode velocity into the signal phase. The two encodings (green and red) are performed in subsequent TRs. In this example the encodings are applied along the slice select direction to encode velocity through the slice. In (b) the encodings are shown as part of a retrospectively gated segmented k-space acquisition. Multiple views are acquired in each R-R interval. In this case four views are acquired for each encoding. Since the temporal resolution is 8xTR in this example, view sharing has been used to improve the apparent temporal resolution. Images from a typical CPC acquisition through the ascending aorta (Ao) and descending aorta (DA). (a) shows the phase/velocity image and (b) shows the magnitude image from a single temporal phase. (c) shows the first ten temporal phases. (d) shows the flow-time curve.
The flow (ml s-1) at each temporal phase is obtained by multiplying the average velocity in each ROI (mm s-1) by the ROI area (mm2). If the instantaneous flow is plotted against time for all the temporal phases, the area under the curve represents the stroke volume (SV). Multiplying the SV by the heart rate gives the cardiac output (ml min-1).
Head-to-Head Effects

March 15, 2019

Matchup effects are a common idea in tennis commentary. They are the idea at the heart of comments like 'her game matches up well against her opponent'. One way to think of a matchup effect is as a surprising head-to-head, when results go against what the overall ability of both players would have us expect. Do such effects exist? And are they substantial enough that they matter when it comes to making better predictions about tennis results? I've been working on predicting wins in tennis for some time. Whenever I've had a chance to talk about the method with tennis insiders, there is always one question I know will come up (if only match results were as easy to forecast!). That is, whether I account for head-to-head. In one sense, any method that includes the historical results of a player is accounting for head-to-head. But I know that this isn't exactly what the question is getting at. The question really comes down to matchup effects and the particular edge one player might have over a specific opponent that goes beyond what their ability would predict: factors like style or intimidation, for example. Most prediction methods (and I've explored many over the years) assume that players' abilities are transitive. That is, if player A is two times greater in overall ability than players B and C, then his win expectations in a match against B and C should be equal. Matchup effects throw a wrench into the transitive property and effectively act like an interaction term, where the overall abilities of players can't in themselves explain the results of their matches against each other. That description gets us on track to how we might identify the presence of head-to-head effects. Suppose we have our favourite approach for predicting the chance player $i$ wins a match against player $j$ that doesn't account for matchup effects (so this might exclude bookmaker odds, for example). Call this expectation $\hat{p}_{ij}$. A basic model to account for head-to-head is: $$logit[P(W_{ij} = 1)] = \beta_0 + \beta_1 \hat{p}_{ij} + \alpha_{ij}$$ The parameter $\alpha_{ij}$ is the key here. It is an unknown constant for the specific matchup that adjusts our expectations when $\hat{p}_{ij}$ assumes abilities are transitive. What is a Typical Head-to-Head? Before we fit such a model, even before we decide whether to use likelihood-based or Bayesian methods, we need to decide which data to use. Just considering the men's game, do we include Futures? Challengers? Or limit it to ATP events only? The answer comes down to where most repeat clashes are happening. We know that tennis is a pyramid, with the size of the competitive pool shooting upward the further down the tournament tiers you go. We might suspect that, for this reason, players at the Challenger level, for example, will not often build up substantial head-to-heads against other players on the tour. But what does the data actually show? Considering all matches played in the past 10 years at the Challenger level or better, having any head-to-head at all turns out to be quite rare. If we consider how likely it is that two players drawn at random from the competitor pool (anyone who ever competed in a Challenger event or better during this period) have played each other in competition before, the chance is just 2 out of 100. The chance that any random pair has a match history of 3 or more is a 3 in 1000 chance.
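A minimal sketch of how such pairwise history counts could be tallied from a list of match results; the file name and column names are assumptions for illustration, not the actual data used here:

```python
import pandas as pd
from math import comb

# One row per match; "winner"/"loser" column names are assumed for illustration.
matches = pd.read_csv("challenger_and_tour_matches_2009_2018.csv")

pairs = matches.apply(lambda r: tuple(sorted((r["winner"], r["loser"]))), axis=1)
history = pairs.value_counts()                     # matches played per player pair

players = pd.unique(matches[["winner", "loser"]].values.ravel())
possible_pairs = comb(len(players), 2)             # all pairs in the competitor pool

share_met = len(history) / possible_pairs              # share of pairs that ever met
share_3plus = (history >= 3).sum() / possible_pairs    # share with a 3+ match history
```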
We might have already thought that the 53-match rivalry of Nadal and Djokovic was unusually long, but this summary tells us that even a 3-match history is roughly a 1-in-1,000 event in the professional game.

| Head-to-head sample size | Frequency (incl. Challengers) | Frequency (excl. Challengers) |
| --- | --- | --- |
| 0 | 98.4% | 96.0% |
| 1 | 1.2% | 2.5% |
| 3+ | 0.1% | 0.7% |

Even if we drop Challenger events, the breakdown doesn't change dramatically. Among players competing primarily at the tour level, the chance is still under 1% that any pair among them will have played 3 matches or more over a 10-year period. Why does any of this matter for estimating head-to-head effects? If there are many cases of player-opponent matchups with a single match, it can impact the accuracy of estimates of the head-to-head effects, pulling all effects closer to zero than would be the case with a less sparse sample. The infrequency of lengthy head-to-heads also suggests that any adjustment for them (if such were warranted) is unlikely to yield a substantial improvement in prediction performance more generally. The number of matches in a season that would involve players with even a moderate match history would be few. But we will come back to this later. We have seen that having played more than 3 matches against an opponent is an unusually long history compared to the norm. Among the matches played by players in this 'long' rivalry subgroup, 1 in 3 of the matches happened at the Challenger level, 1 in 4 at the 250 level, and 1 in 5 at the Masters level. Given the preponderance of singleton head-to-heads, I will examine matchup effects among player-opponent pairs with a history of 2 or more matches prior to 2018 at the Challenger level or above (reserving 2018 and 2019 matches for out-of-sample testing). For the $\hat{p}_{ij}$, I will use a surface-adjusted Elo rating [1]. This is a dynamic transitive model that also accounts for surface specialty. In using this as the prediction covariate, the aim is for the head-to-head to capture any intransitive effects not explained by the overall ability or surface preferences of the players. When a logistic mixed model was fit to these matchups, the standard deviation for the player-opponent random effect was $\sigma = 0.40$, suggesting evidence of matchup effects. When we look at the conditional means for the specific effect estimates $\hat{\alpha}_{ij}$, 1 in 6 would imply an adjustment in predictions of 15% or more, another indication of statistically meaningful head-to-head effects. The chart below is a forest plot of the 100 biggest head-to-head effects found in the men's sample. The effect is expressed in terms of the factor you would multiply the standard odds by for these players to account for the head-to-head effect. The player who benefits from the head-to-head effect is the one on the left in the x-axis labels. Larger dots indicate the effects with greater relative certainty. There are a lot of interesting things to pick out of these effects (and it is only a sample of the 100 largest ones!). At the very top of the chart we see a cluster of head-to-heads involving Stan Wawrinka and Tomas Berdych, their own head-to-head being the one with the largest effect overall. These two have faced each other 16 times in their career, with Wawrinka leading 11 to 5; 10 times since 2010, with Wawrinka dropping just 1 match. My surface-adjusted ratings show that Berdych was ahead of Wawrinka for all of their last 10 matches, though they have also been close, differing by no more than 50 points on any one occasion.
That would make Wawrinka's edge over Berdych look surprisingly lopsided. It is a similar story with Wawrinka's matchup against Cilic. Wawrinka has won all of their 8 most recent matches (they've faced each other 14 times overall) despite Cilic surpassing Wawrinka in the player ratings in 2016 and 2017. There are several players who pop up in multiple of the biggest head-to-head effects. Fabio Fognini features 7 times in the list, having the benefit of the matchup effect for 5 of these, with the biggest edge being over Roberto Bautista Agut. Berdych is the next most frequently occurring player in the list with 6 head-to-heads, which is the same number that Horacio Zeballos appears in. David Ferrer comes up in a total of 5 of the head-to-heads and is at the wrong end of the stick for 4 of the 5 (vs Andy Murray, Novak Djokovic, Stan Wawrinka, and Kei Nishikori). Ferrer is an interesting case because he is often considered one of the best players who always came up short against the Big 4. This tells us that head-to-head effects can also arise when a player runs into a ceiling in ability, and not necessarily from clashes of playing style alone. There are some head-to-heads people might have expected to see in the top 100 that don't appear. Rafael Nadal and Roger Federer is one. The head-to-head effect in this case gives a 7% boost to Rafael Nadal, not insignificant but also not as big as some might have thought. I think this can be explained by the fact that most of Nadal's wins over Federer have been on clay (13 of 23), where his surface-adjusted rating would have explained that record. The clash between Federer and Wawrinka gets a bigger head-to-head effect, with Federer getting a 20% boost in his odds when facing Wawrinka. That would make Federer's recent win over Wawrinka at Indian Wells that much less surprising. Another interesting matchup with relevance for Indian Wells was Gael Monfils' win over Philipp Kohlschreiber. With Kohlschreiber coming off a massive upset over Novak Djokovic, many might have thought it was his time to go deep. Monfils was going to be a hard opponent for anyone, but the head-to-head effect suggests it is an especially tricky matchup for Kohlschreiber, whose matchup against Monfils puts him in the top 100 and would give him a 30% disadvantage.
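To make the odds-factor interpretation concrete, here is a minimal sketch (with purely illustrative numbers) of how an estimated head-to-head effect shifts a standard win probability on the odds scale:

```python
import math

def h2h_adjusted_prob(p_standard, alpha_ij):
    """Shift a standard win probability by a head-to-head effect on the log-odds scale."""
    odds = p_standard / (1 - p_standard)
    adjusted_odds = odds * math.exp(alpha_ij)   # exp(alpha) is the odds multiplier
    return adjusted_odds / (1 + adjusted_odds)

# A 55% favourite whose estimated alpha corresponds to a 20% odds boost:
print(round(h2h_adjusted_prob(0.55, math.log(1.2)), 3))   # ~0.595
```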
Prediction Improvement

We don't even have to apply the head-to-head correction to know that it isn't going to improve predictions for the majority of matches. There are just too few matches between players who have played each other at all in the past to get any benefit from such an adjustment. But that doesn't mean that the correction isn't valuable. The nature of tennis means that the biggest rivalries will tend to be between the most well-known players. The three biggest rivalries in the dataset used in this post are Nadal v Djokovic, Federer v Djokovic, and Djokovic v Murray. So, although big-n head-to-heads are rare, they are high-impact matches when they occur. So if we focus just on matches where head-to-head might matter, what do we find? Using matches from 2018-2019 as the test data, there were 754 that involved players with a head-to-head history of more than 3 matches. The overall change in prediction accuracy from the standard prediction to one with the head-to-head correction was negligible for this group.

| Group | Number of matches | Standard prediction accuracy | +H2H prediction accuracy |
| --- | --- | --- | --- |
| Matchups with n>3 match history | 754 | 66.6% | 66.7% |
| Subgroup of close matches | 233 | 54.9% | 55.4% |
| Subgroup where H2H changed the predicted winner | 21 | 47.6% | 52.4% |

If we look at the subset among these that were close, where the standard prediction was somewhere between 40% and 60%, does the correction perform any better? The accuracy was 55.4% with head-to-head effects versus 54.9% without, which is an improvement but one that might not hold up with repeated sampling, given that it is based on 233 matches. The final subgroup we consider comprises the cases where the consideration of head-to-head actually reversed the favoured player (the player with >50% win expectation). There were only 21 matches of this type, a small sample, but one that showed the biggest difference between using head-to-head or not, with a 5-point gain in accuracy for the prediction with the head-to-head correction. Because this last subgroup is an especially interesting one, and a group where the correction for matchup has the biggest impact, I've plotted each of those matches below. The result and predictions are all with respect to the first player in the matchup. The chart shows, for example, that Stefanos Tsitsipas would have gotten +2 percentage points toward his win prediction when he faced David Goffin this year in Marseille owing to the head-to-head correction, a match he did win. Although there were more changes in the right direction, there were plenty in the wrong direction as well. Marin Cilic's loss to Alexander Zverev at the 2018 ATP World Tour Finals is a prime example. The head-to-head correction pulled the match much more in favour of Cilic (53% versus 45%) but the match went to Zverev in that instance.

Style versus Head-to-Head

Even with some evidence of improvement in close matches with lengthy head-to-heads, small sample sizes still make adjusting for head-to-head difficult. The most obvious way to overcome this would be to look at grouping players by their style of play, which would allow us to draw strength from players with similar styles to estimate matchup effects. It is the obvious path, but how to go about defining playing style is far from clear. For now, adjustment for specific head-to-heads has some merits and also reveals both expected and surprising results about the biggest matchup effects in the game.

[1] Technically, the Elo rating system supposes a linear relationship between the log odds for the win and the rating difference of the players. However, using the probability transformation of the prediction has some advantages in terms of numerical stability of the model. But the choice of transformation doesn't have a huge impact on the results either way.
AlphaBeta: computational inference of epimutation rates and spectra from high-throughput DNA methylation data in plants

Yadollah Shahryary1,2, Aikaterini Symeonidi1, Rashmi R. Hazarika1,2, Johanna Denkena3, Talha Mubeen1,2, Brigitte Hofmeister4, Thomas van Gurp7, Maria Colomé-Tatché3,5,6, Koen J.F. Verhoeven7, Gerald Tuskan8, Robert J. Schmitz2,9 & Frank Johannes1,2

Genome Biology volume 21, Article number: 260 (2020)

Stochastic changes in DNA methylation (i.e., spontaneous epimutations) contribute to methylome diversity in plants. Here, we describe AlphaBeta, a computational method for estimating the precise rate of such stochastic events using pedigree-based DNA methylation data as input. We demonstrate how AlphaBeta can be employed to study transgenerationally heritable epimutations in clonal or sexually derived mutation accumulation lines, as well as somatic epimutations in long-lived perennials. Application of our method to published and new data reveals that spontaneous epimutations accumulate neutrally at the genome-wide scale, originate mainly during somatic development, and that they can be used as a molecular clock for age-dating trees.

Cytosine methylation is an important chromatin modification and a pervasive feature of most plant genomes. It has major roles in the silencing of transposable elements (TEs) and repeat sequences and is also involved in the regulation of some genes [1]. Plants methylate cytosines at symmetrical CG and CHG sites, but also extensively at asymmetrical CHH sites, where H = A, T, C. The molecular pathways that establish and maintain methylation in these three sequence contexts are well-characterized [2] and are broadly conserved across plant species [3–7]. Despite its tight regulation, the methylation status of individual cytosines or of clusters of cytosines is not always faithfully maintained across cell divisions. As a result, cytosine methylation is sometimes gained or lost in a stochastic fashion, a phenomenon that has been termed "spontaneous epimutation." In both animals and plants, spontaneous epimutations have been shown to accumulate throughout development and aging [8], probably as a byproduct of the mitotic replication of small stem cell pools that generate and maintain somatic tissues. However, in plants, spontaneous epimutations are not only confined to somatic cells, but occasionally pass through the gametes to subsequent generations [9, 10]. In the model plant Arabidopsis thaliana (A. thaliana), these transgenerationally heritable (i.e., "germline") epimutations are mainly restricted to CG sites and appear to be absent or not detectable at CHG and CHH sites [11–14]. Initial estimates in A. thaliana indicate CG "germline" epimutations are about five orders of magnitude more frequent than genetic mutations (∼10−4 vs. ∼10−9 per site per haploid genome per generation) [12, 14–16]. Because of these relatively high rates, CG methylation differences accumulate rapidly in the A. thaliana genome and generate substantial methylation diversity among individuals in the course of only a few generations [12, 17–20]. A key experimental challenge in studying epimutational processes in a multi-generational setting is to be able to distinguish "germline" epimutations from other types of methylation changes, such as those associated with segregating genetic variation or transient environmental perturbations [21].
Mutation accumulation (MA) lines grown in controlled laboratory conditions are a powerful experimental system to achieve this. MA lines are derived from a single isogenic founder and are independently propagated for a large number of generations. The lines can be advanced either clonally or sexually, i.e., self-fertilization or sibling mating (Fig. 1a). In clonally produced MA lines, the isogenicity of the founder is not required because the genome is "fixed" due to the lack of genetic segregation. Overview of the AlphaBeta computational pipeline. a Top panel: Construction of multi-generational (G0 to GN) mutation accumulation (MA) lines through sexual (selfing or sibling mating) or asexual (clonal) propagation. The different lineages (L1 to L3) can be represented as a pedigree. The pedigree branch point times and the branch lengths are typically known, a priori, from the experimental design. 5mC sampling can be performed at selected generations, either from plant material of direct progenitors or from siblings of those progenitors. The data can be used to estimate the rate and spectrum of "germline" epimutations. Bottom panel: Long-lived perennials, such as trees, can be viewed as a natural mutation accumulation system. The tree branching structure can be treated as an intra-organismal phylogeny of somatic lineages. 5mC sampling can be performed on leaf tissues from selected branches. Along with coring data, the leaf methylomes can be used to estimate the rate and spectrum of somatic epimutations. b Data pre-processing: AlphaBeta requires methylation data and pedigree data as input. File conversion: Using the input files, AlphaBeta calculates the 5mC divergence (D) as well as the divergence time (Δt) between all sample pairs. Model estimation: AlphaBeta fits competing epimutation models to the data. The model parameters are estimated using numerical non-linear least squares optimization. Model comparisons allow for tests of selection and neutrality. The kinship among the different MA lineages can be presented as a pedigree (Fig. 1a). The structure (or topology) of these pedigrees is typically known, a priori, as the branch-point times and the branch lengths are deliberately chosen as part of the experimental design. In conjunction with multi-generational methylome measurements, MA lines therefore permit "real-time" observations of "germline" epimutations against a nearly invariant genomic background and can facilitate estimates of the per-generation epimutation rates [11]. Sequenced methylomes from a large number of sexually derived MA lines are currently available in A. thaliana [12–14, 18, 22, 23] and rice [24], and various other MA lines are currently under construction for epimutation analysis in different genotypes, environmental conditions, and plant species. Beyond experimentally derived MA lines, natural mutation accumulation systems can also be found in the context of plant development and aging. An instructive example is long-lived perennials, such as trees, whose branching structure can be interpreted as a pedigree (or phylogeny) of somatic lineages that carry information about the epimutational history of each branch [25]. In this case, the branch-point times and the branch lengths can be determined ad hoc using coring data or other types of dating methods (Fig. 1a). By combining this information with contemporary leaf methylome measurements, it is possible to infer the rate of somatic epimutations as a function of age (see also co-submission, [26]).
Attempts to infer the rate of spontaneous epimutations in these diverse plant systems are severely hampered by the lack of available analytical tools. Naive approaches that try to count the number of epimutations per some unit of time cannot be used in this setting, because DNA methylation measurements are far too noisy. On the technological side, this noise stems from increased sequencing and alignment errors of bisulphite reads and bisulphite conversion inefficiencies. On the biological side, increased measurement error may result from within-tissue heterogeneity in 5mC patterns [27] and the fact that DNA methylomes are in part transcriptionally responsive to variation in environmental/laboratory conditions [28]. To overcome these challenges, we previously implemented a model-based estimation method, which was originally designed for the analysis of selfing-derived mutation accumulation lines [12]. This approach appropriately accounts for measurement error in the data by describing the time-dependent accumulation of epimutations through an explicit statistical model (Fig. 1b). Fitting this model to pedigree-based 5mC measurements yields estimates of the rate of spontaneous methylation gains and losses and provides a quantitative basis for predicting DNA methylation dynamics over time. Here, we generalize this method and present AlphaBeta, the first software package for inferring the rate and spectrum of "germline" and somatic epimutations in plants. AlphaBeta can be widely applied to multi-generational data from sexually or asexually derived MA lines, as well as to intra-generational data from long-lived perennials such as trees. Drawing on novel and published data, we demonstrate the power and versatility of our approach and make recommendations regarding its implementation. The AlphaBeta method We start from the assumption that 5mC measurements have been obtained from multiple sampling time-points throughout the pedigree. These measurements can come from whole genome bisulphite sequencing (WGBS) [29, 30], reduced representation bisulphite sequencing (RRBS) [31], or epigenotyping-by-sequencing (epiGBS) [32] technologies, and possibly also from array-based methods. We only require that a "sufficiently large" number of loci has been measured. Moreover, with multigenerational data, we allow measurements to come from plant material of direct progenitors, or else from individual or pooled siblings of those progenitors (Fig. 1a). Calculating 5mC divergence For the ith sequenced sample in the pedigree, let sik be the observed methylation state at the kth locus (k=1⋯N). Here, the N loci can be individual cytosines or pre-defined regions (i.e., clusters of cytosines). We assume that sik takes values 1, 0.5, or 0, according to whether the diploid epigenotype at that locus is m/m, m/u, u/u, respectively, where m is a methylated and u is an unmethylated epiallele. Using this coding, we calculate the mean absolute 5mC divergence, D, between any two samples i and j in the pedigree as follows: $$\begin{array}{@{}rcl@{}} D_{ij} = \sum_{k=1}^{N} I(s_{ik}, s_{jk})N^{-1}, \end{array} $$ where I(·) is an indicator function, such that $$I(s_{ik}, s_{jk})= \left\{ \begin{array}{ll} 0 & \text{if } s_{ik} = s_{jk} \\ \frac{1}{2} & \text{if } s_{ik} = 0.5 \text{ and } s_{jk} \in \{0,1\} \\ \frac{1}{2} & \text{if } s_{jk} = 0.5 \text{ and } s_{ik} \in \{0,1\} \\ 1 & \text{if } s_{ik} = 0 \text{ and } s_{jk} = 1 \\ 1 & \text{if } s_{ik} = 1 \text{ and } s_{jk} = 0. \\ \end{array} \right.
$$ The software automatically calculates Dij and Δt for all unique sample pairs using as input the methylation state calls and the pedigree coordinates of each sample (Fig. 1b). Modelling 5mC divergence We model the 5mC divergence as $$\begin{array}{@{}rcl@{}} D_{ij} = c + D^{\bullet}_{ij}(M_{\Theta}) + \epsilon_{ij}. \end{array} $$ Here, εij∼N(0,σ2) is the normally distributed residual error, c is the intercept, and \(D^{\bullet }_{ij}(M_{\Theta })\) is the expected divergence between samples i and j as a function of an underlying epimutation model M(·) with parameter vector Θ (see below). We have that $$\begin{array}{@{}rcl@{}} D^{\bullet}_{ij}(M_{\Theta}) &=& \sum_{n\in v} \sum_{l\in v} \sum_{m\in v} I(l,m) \\ &\cdot& Pr(s_{ik}=l, s_{jk}=m|s_{ijk}=n,M_{\Theta})\\ &\cdot& Pr(s_{ijk}=n|M_{\Theta}), \end{array} $$ where sijk is the methylation state at the kth locus of the most recent common ancestor of samples i and j, and v={0,0.5,1}. Since samples si and sj are conditionally independent, we can further write: $$\begin{array}{@{}rcl@{}} Pr(s_{ik}, s_{jk}|s_{ijk},M_{\Theta}) &=& Pr(s_{ik}|s_{ijk},M_{\Theta}) \\ &\cdot& Pr(s_{jk}|s_{ijk},M_{\Theta}). \end{array} $$ To be able to evaluate these conditional probabilities, it is necessary to posit an explicit form for the epimutational model, MΘ. To motivate this, we define G to be a 3×3 transition matrix, which summarizes the probability of transitioning from epigenotype l to m in the time interval [t,t+1]: $$\begin{array}{*{20}l} &\qquad u/u (t+1) {\kern5pt} m/u (t+1) {\kern10pt} m/m (t+1)\\ \mathbf{G}&= \left[\begin{array}{ccc} f_{11}(\alpha, \beta, w) & f_{12}(\alpha, \beta, w) & \cdot\\ f_{21}(\alpha, \beta, w) & \cdot & \cdot \\ \cdot & \cdot & f_{33}(\alpha, \beta, w) \\ \end{array}\right] \begin{array}{c} u/u \;(t)\\ m/u \,(t)\\ m/m (t) \end{array} \end{array} $$ The elements of this matrix are a function of gain rate α (i.e., the probability of a stochastic epiallelic switch from an unmethylated to a methylated state within interval [t,t+1]), the loss rate β (i.e., the probability of a stochastic epiallelic switch from a methylated to an unmethylated state), and the selection coefficient w (w∈[0,1]). It can be shown that for a diploid system propagated by selfing, G has the form $$\left[\begin{array}{ccc} (1-\alpha)^{2} & 2(1-\alpha)\alpha & \alpha^{2} \\ \frac{1}{4}(\beta+1-\alpha)^{2} & \frac{1}{2}(\beta+1-\alpha)(\alpha+1-\beta) & \frac{1}{4}(\alpha+1-\beta)^{2} \\ \beta^{2} & 2(1-\beta)\beta & (1-\beta)^{2} \end{array}\right]\circ \; \mathbf{W}, $$ and for systems that are propagated clonally or somatically G is: $$\left[\begin{array}{ccc} (1-\alpha)^{2} & 2(1-\alpha)\alpha & \alpha^{2} \\ \beta(1-\alpha) & (1-\alpha)(1-\beta)+\alpha\beta & \alpha(1-\beta)\\ \beta^{2} & 2(1-\beta)\beta & (1-\beta)^{2} \end{array}\right] \circ \; \mathbf{W}, $$ where ∘ is the Hadamard product and W is a matrix of selection coefficients of the form $$\left[\begin{array}{ccc} w & \frac{(w+1)}{2} & 1 \\ w & \frac{(w+1)}{2} & 1 \\ w & \frac{(w+1)}{2} & 1 \\ \end{array}\right] \text{ or } \left[\begin{array}{ccc} 1 & \frac{(w+1)}{2} & w \\ 1 & \frac{(w+1)}{2} & w \\ 1 & \frac{(w+1)}{2} & w \\ \end{array}\right] $$ depending on whether selection is against epiallele u or m, respectively. Using this formalism, we can distinguish four different models, which we denote by ABneutral, ABmm, ABuu, and ABnull. Model ABneutral assumes that the accumulation of spontaneous 5mC gains and losses is selectively neutral (w=1,α and/or β>0). 
In this special case, all epigenotype transitions from time t to t+1 are only governed by the rates α and β, and—in the case of selfing—also by the Mendelian segregation of epialleles u and m. The selection models ABmm and ABuu, by contrast, assume that epimutation accumulation is in part shaped by selection against spontaneous losses or gains of 5mC, respectively (0≤w<1,α and/or β>0). For example, with selection in favor of epiallele u (model ABuu), the fitness of epihomozygote m/m and epiheterozygote m/u are reduced by a factor of w and (w+1)/2, respectively. We incorporate this fitness loss directly into the transition matrix by weighing the transition probabilities to these epigenotypes accordingly [33]. Similar arguments hold for the case where selection is for epiallele m. As a reference, we define model ABnull as the null model of no accumulation, with α=0,β=0, and w=1. To ensure that the rows of G (i.e., the transition probabilities) still sum to unity in the presence of selection, we redefine G using the normalization: $$\mathbf{G^{\prime}} = \left[\begin{array}{ccc} (\sum_{i} \mathbf{G}_{1i})^{-1} & 0 & 0 \\ 0 & (\sum_{i} \mathbf{G}_{2i})^{-1} & 0 \\ 0 & 0 & (\sum_{i} \mathbf{G}_{3i})^{-1} \\ \end{array}\right] \cdot \mathbf{G} $$ Based on Markov chain theory, the conditional probability Pr(sik|sijk,MΘ) can then be expressed in terms of G′ as follows: $$\begin{array}{@{}rcl@{}} {\sum_{n}} Pr(s_{ik}=0|s_{ijk}=n, M_{\Theta}) = \sum_{r=1}^{3}(\mathbf{G^{\prime}}^{t_{i} - t_{ij}})_{r1} \\ {\sum_{n}} Pr(s_{ik}=0.5|s_{ijk}=n, M_{\Theta}) = \sum_{r=1}^{3}(\mathbf{G^{\prime}}^{t_{i} - t_{ij}})_{r2} \\ {\sum_{n}} Pr(s_{ik}=1|s_{ijk}=n, M_{\Theta}) = \sum_{r=1}^{3}(\mathbf{G^{\prime}}^{t_{i} - t_{ij}})_{r3} \end{array} $$ where ti is the time-point corresponding to sample i and tij is the time-point of the most recent common ancestor shared between samples i and j, (tij≤ti,tj), and r is a row index. Expressions for Pr(sjk|sijk,MΘ,tj) can be derived accordingly, by simply replacing ti by tj in the above equation. Note that the calculation of these conditional probabilities requires repeated matrix multiplication. However, a direct evaluation of these equations is also possible using the fact that $$\begin{array}{@{}rcl@{}} \mathbf{G^{\prime}}^{t_{i} - t_{ij}} = \mathbf{p}\mathbf{V}^{t_{i} - t_{ij}}\mathbf{p}^{-1} \text{ and } \mathbf{G^{\prime}}^{t_{j} - t_{ij}} = \mathbf{p}\mathbf{V}^{t_{j} - t_{ij}}\mathbf{p}^{-1}, \end{array} $$ where p is the eigenvector of matrix G′ and V is a diagonal matrix of eigenvalues. For selfing and clonal/somatic systems, these eigenvalues and eigenvectors can be obtained analytically. Finally, to derive \(D^{\bullet }_{ij}(M_{\Theta })\), we also need to supply Pr(sijk=n|MΘ); that is, the probability that locus k in the most recent common ancestor of samples i and j is in state n (n∈{0,0.5,1}). To do this, consider the methylome of the pedigree founder at time t=1, and let π=[p1 p2 p3] be a row vector of probabilities corresponding to states u/u,u/m and m/m, respectively. Using Markov Chain theory, we have $$\begin{array}{@{}rcl@{}} Pr(s_{ijk}= 0|M_{\Theta}) &=& \left[ \pi \, \mathbf{G^{\prime}}^{(t_{ij}-1)} \right]_{1} \\ Pr(s_{ijk}= 0.5|M_{\Theta}) &=& \left[ \pi \, \mathbf{G^{\prime}}^{(t_{ij}-1)} \right]_{2} \\ Pr(s_{ijk}= 1|M_{\Theta}) &=& \left[ \pi \, \mathbf{G^{\prime}}^{(t_{ij}-1)} \right]_{3} \end{array} $$ In many situations, the most recent common ancestor happens to be the pedigree founder itself, so that tij=1. 
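As a minimal numerical sketch of the quantities defined above (an illustration only, not the AlphaBeta implementation itself), the observed divergence D and the clonal/somatic transition matrix under selection can be written as:

```python
import numpy as np

def divergence(s_i, s_j):
    """Mean absolute 5mC divergence D between two samples coded 0, 0.5, 1 per locus;
    this reproduces the indicator-based definition given above."""
    s_i, s_j = np.asarray(s_i, dtype=float), np.asarray(s_j, dtype=float)
    return np.mean(np.abs(s_i - s_j))

def clonal_G(alpha, beta, w=1.0, select_against="m"):
    """Epigenotype transition matrix G for a clonal/somatic system, with optional selection.
    Rows/columns are ordered u/u, m/u, m/m as in the text."""
    G = np.array([
        [(1 - alpha) ** 2,   2 * (1 - alpha) * alpha,                 alpha ** 2],
        [beta * (1 - alpha), (1 - alpha) * (1 - beta) + alpha * beta, alpha * (1 - beta)],
        [beta ** 2,          2 * (1 - beta) * beta,                   (1 - beta) ** 2],
    ])
    if select_against == "m":
        W = np.tile([1.0, (w + 1) / 2, w], (3, 1))   # selection against epiallele m
    else:
        W = np.tile([w, (w + 1) / 2, 1.0], (3, 1))   # selection against epiallele u
    G = G * W                                        # Hadamard product with W
    return G / G.sum(axis=1, keepdims=True)          # row-normalised, i.e. the G' of the text

# t-step transition probabilities, e.g. from the founder to a sample t time units later:
G_prime = clonal_G(alpha=1e-4, beta=5e-4)
P_t = np.linalg.matrix_power(G_prime, 10)
```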
In the case where the methylome of the pedigree founder has been measured, the probabilities p1, p2, and p3 can be estimated directly from the data as x1/N, x2/N, and x3/N, respectively. Here, x1, x2, and x3 are the numbers of loci that are observed to be in states u/u, u/m, and m/m, and N is the total number of loci. Typically, however, x2 is unknown as most DMP and DMR callers do not output epiheterozygous states (i.e., intermediate methylation calls). Instead, we therefore use $$\begin{array}{@{}rcl@{}} p_{1} = \frac{x_{1}}{N},\qquad p_{2} =\gamma \frac{x_{3}}{N},\qquad p_{3} = (1-\gamma) \frac{x_{3}}{N} \end{array} $$ where γ∈[0,1] is an unknown parameter. Model inference To obtain estimates for Θ, we minimize the residual sum of squares by solving $$\begin{array}{@{}rcl@{}} \nabla {\sum_{q=1}^{M}} \left(D_{q} - D^{\bullet}_{q}(M_{\Theta}) - c \right)^{2} = \mathbf{0}, \end{array} $$ where the summation is over all M unique pairs of sequenced samples in the pedigree. Minimization is performed using the "Nelder-Mead" algorithm as part of the optimx package in R. However, from our experience, convergence is not always stable, probably because the function \(D^{\bullet}_{q}(M_{\Theta})\) is complex and highly non-linear. We therefore include the following minimization constraint: $$\nabla \left[ {\sum_{q=1}^{M}} \left(D_{q} - D^{\bullet}_{q}(M_{\Theta}) - c \right)^{2} + M\left(\tilde{p}_{1} - p_{1}(t_{\infty}, M_{\Theta}) \right)^{2} \right] = \mathbf{0}. $$ Here, p1(t∞,MΘ) is the equilibrium proportion of u/u loci in the genome as t→∞. For a selfing system with w=1, we have that $$\begin{array}{@{}rcl@{}} p_{1}(t_{\infty}, M_{\Theta}) =\frac{\beta((1-\beta)^{2} - (1-\alpha)^{2} -1)}{(\alpha + \beta)((\alpha + \beta -1)^{2} - 2)}, \end{array} $$ and for a clonal/somatic system, it is: $$\begin{array}{@{}rcl@{}} p_{1}(t_{\infty}, M_{\Theta}) = \frac{\beta^{2}}{(\alpha+\beta)^{2}}. \end{array} $$ For the case where 0≤w<1, the equations are more complex and are omitted here. Note that the value \(\tilde{p}_{1}\) is an empirical guess at these equilibrium proportions. For samples whose methylomes can be assumed to be at equilibrium, we have that p1(t=1)=p1(t=2)=⋯=p1(t∞), meaning that the proportion of loci in the genome that are in state u/u is (dynamically) stable for any time t. Under this assumption, \(\tilde{p}_{1}\) can be replaced by \(\overline{p}_{1}\), which is the average proportion of u/u loci calculated from all pedigree samples. Confidence intervals We obtain confidence intervals for the estimated model parameters by bootstrapping the model residuals. The procedure has the following steps: (1) For the qth sample pair (q=1,⋯,M), we define a new response variable \(B_{q} = \hat{D}_{q} + \hat{\epsilon}_{k}\), where \(\hat{D}_{q}\) is the fitted divergence for the qth pair and \(\hat{\epsilon}_{k}\) is drawn at random and with replacement from the 1×M vector of fitted model residuals. (2) Refit the model using the new response variable and obtain estimates for the model parameters. (3) Repeat steps 1 to 2 a large number of times to obtain a bootstrap distribution. (4) Use the bootstrap distribution from 3 to obtain empirical confidence intervals.
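The residual bootstrap in steps 1 to 4 can be sketched as follows; here fit_fn is a placeholder for the least-squares fit described above and is an assumption of this illustration rather than a function provided by the package:

```python
import numpy as np

def residual_bootstrap(D_fitted, residuals, fit_fn, n_boot=1000, seed=1):
    """Empirical confidence intervals by resampling fitted residuals (steps 1-4 above).
    fit_fn maps a vector of divergences to a parameter vector, e.g. (alpha, beta, ...)."""
    rng = np.random.default_rng(seed)
    boot_estimates = []
    for _ in range(n_boot):
        eps = rng.choice(residuals, size=len(D_fitted), replace=True)
        B = D_fitted + eps                  # new response variable (step 1)
        boot_estimates.append(fit_fn(B))    # refit the model (step 2)
    boot_estimates = np.asarray(boot_estimates)
    # empirical 95% confidence intervals, one pair of bounds per parameter (step 4)
    return np.percentile(boot_estimates, [2.5, 97.5], axis=0)
```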
Testing for selection To assess whether a selection model provides a significantly better fit to the data compared to a neutral model, we define $$\begin{array}{@{}rcl@{}} RSS_{F}=\sum^{M}_{q=1} \epsilon_{q}(\hat{\Theta})^{2} \end{array} $$ $$\begin{array}{@{}rcl@{}} RSS_{R}= \sum^{M}_{q=1} \epsilon_{q}(\hat{\alpha}, \hat{\beta}, \hat{\gamma}, \hat{c} | w=1)^{2} \end{array} $$ to be the estimated residual sums of squares of the full model and reduced (i.e., neutral) model, respectively, with corresponding degrees of freedom dfF and dfR. To test for selection, we evaluate the following F-statistic: $$\begin{array}{@{}rcl@{}} F = \frac{(RSS_{R} - RSS_{F})}{RSS_{F}} \cdot \frac{df_{F}}{df_{N}}, \end{array} $$ where dfN=dfF−dfR. Under the null hypothesis, F∼F(dfN,dfF). To illustrate the utility of our method, we used AlphaBeta to study "germline" epimutations in selfing- and asexually derived MA lines of Arabidopsis (A. thaliana) and dandelion (Taraxacum officinale), as well as somatic epimutations in a single poplar tree (Populus trichocarpa). Our goal was to demonstrate the wide range of application of our method and to highlight several novel insights into the nature of spontaneous epimutations in plants. Analysis of spontaneous epimutations in selfing-derived A. thaliana MA lines We first analyzed three A. thaliana MA pedigrees (MA1_1, MA1_3, MA3, see Fig. 2a). We chose these MA pedigrees because they differ markedly in their topologies, 5mC sampling strategies, sequencing method, and depth (Fig. 2a, b, Additional file 1: Table S1). All MA pedigrees were derived from a single Col-0 founder accession. The first MA pedigree (MA1_1) was originally published by Becker et al. [13]. The pedigree data consists of 11 independent lineages with sparsely collected WGBS samples (∼ 19.2X coverage) from generations 3, 31, and 32, and a maximum divergence time (Δt) of 64 generations. MA1_3 was previously published by van der Graaf et al. [12]. This data consists of a single lineage with dense MethylC-seq measurements (∼ 13.8X coverage) from generations 18 to 30, and a maximum Δt of 13 generations. Finally, we present a new pedigree (MA3), which consists of 2 lineages with dense MethylC-seq measurements (∼ 20.8X coverage) from generations 0 to 11, and a maximum Δt of 22 generations. Unlike MA1_1 and MA1_3, MA3 has 5mC measurements from progenitor plants of each sampled generation, rather than from siblings of those progenitors (Fig. 2a). Further information regarding the samples, sequencing depths, and platforms is provided in Additional file 1: Table S1. A detailed description of data pre-processing and methylation state calling can be found in the "Materials and data pre-processing" section. Analysis of "germline" epimutations in A. thaliana mutation accumulation (MA) lines. a Three different MA pedigrees were analyzed. All three pedigrees were derived from a single Columbia (Col-0) inbred genotype. Two of the pedigrees were previously published (MA1_1, Becker et al. 2011; MA1_3, van der Graaf et al. 2015), and one pedigree (MA3) is new. These three MA pedigrees were chosen because they differ in their topologies, 5mC measurement strategies, and the temporal resolution of the 5mC samples. b Overview of the data: N is the total number of sequenced samples; Seq depth is the average sequence depth of the samples; # TP is the number of unique time-points (or generations) that are sampled; max. (Δt) is the maximum divergence time (in generations) in the pedigree. c Application of models ABnull, ABneutral, ABmm, and ABuu.
The best fitting model is indicated for each MA pedigree, sequence context (CG, CHG, and CHH), and genomic feature (global, exons, promoters, TEs). d Shown are the fits of the best fitting models for each pedigree and context. e Schematic representation of transgenerationally stable CHH epimutations. The barplots indicate the density of stable CHH epimutations in lineages L2 and L8 of the MA3 pedigree. f CHH sites featuring stable epimutations tend to fall outside of sRNA clusters in lineages L2 and L8. g Analysis of the cmt2 mutant and Col-0 wt from Stroud et al. [2] shows loss of methylation in the mutant at the stable CHH epimutation sites, indicating that these loci are targeted by CMT2. h Compared to the whole genome (wg), CHH loci with stable epimutations are enriched for CWA trinucleotides, which is a preferred substrate for CMT2 binding. Spontaneous epimutations accumulate neutrally over generations We started by plotting genome-wide (global) 5mC divergence (D) against divergence time (Δt). D increases as a function of Δt in all pedigrees (Fig. 2d). A characteristic pattern is the rapid, non-linear increase in D for the first ∼ 8 generations followed by a nearly linear increase. As pointed out before [12], the initial non-linearity is driven by the stable segregation and fixation of epiheterozygote loci that originate from the pedigree founder, a phenomenon that has been well-described in the classical genetic theory of experimental line crosses [34–37]. By contrast, the subsequent linear increase in D is mainly due to the accumulation of new epimutations that arise de novo during inbreeding. The co-occurrence of these two processes is restricted to mutation accumulation systems that are propagated sexually. In clonally or asexually derived MA lines, the non-linear increase in D should be absent, as can indeed be seen in our later analysis of poplar and dandelion (see below). Another striking insight from the 5mC divergence patterns is that the increase in D is particularly pronounced for context CG but appears to be low, or even absent, at CHG and CHH loci. Similar observations have previously led to the hypothesis that the inheritance of spontaneous epimutations may be restricted to CG dinucleotides [11, 12], perhaps as a consequence of the preferential reinforcement of CHG and CHH methylation during sexual reproduction [38, 39]. Using heuristic arguments, it had been further suggested that CG epimutations accumulate neutrally, at least at the level of individual cytosines, meaning that 5mC gains and losses in this context are under no selective constraints [12]. However, these hypotheses have never been tested explicitly due to a lack of analytical tools. To address this, we fitted models ABneutral, ABmm, ABuu, and ABnull to the divergence data of each pedigree (Fig. 2c). As mentioned above (see the "The AlphaBeta method" section), model ABneutral assumes that spontaneous 5mC gains and losses accumulate neutrally across generations, ABmm assumes that the accumulation is partly shaped by selection against spontaneous losses of 5mC, ABuu assumes that the accumulation is partly shaped by selection against spontaneous gains, and ABnull is the null model of no accumulation. Formal model comparisons revealed that ABneutral provides the best fit to the 5mC divergence data in context CG in all pedigrees (Fig. 2c, Additional files 2, 3, and 4: Tables S2-S4). This was true at the genome-wide scale (global) as well as at the sub-genomic scale (exons, promoters, TEs).
Globally, ABneutral explained between 77 and 90% of the total variance in D, indicating that a neutral epimutation model provides a good and sufficient description of the molecular process that generates heritable 5mC changes at the level of individual cytosines over time. Interestingly, we also detected, for the first time, highly significant accumulation of neutral epimutations in contexts CHG and CHH (Fig. 2c, Additional files 2, 3, and 4: Tables S2-S4). However, the detection of these accumulation patterns was mainly restricted to MA1_1, the largest of the three pedigrees in terms of both sample size (N=26) and divergence times (max. Δt=64), and to some extent also to MA3, the second largest of the three pedigrees (N=13, max. Δt=22). The detected accumulation of CHH epimutations was somewhat surprising, given that cytosine methylation in this context is typically targeted by the RNA-directed DNA methylation pathway (RdDM). The de novo action of this pathway should prevent the formation of transgenerationally stable epimutations, particularly those originating from DNA methylation loss [40]. To explore this observation in more detail, we inspected specific CHH sites that showed stable methylation status changes over generation time (Fig. 2e). Our analysis revealed that these CHH sites actually fall outside of known sRNA clusters and are therefore unlikely to be involved in RdDM (Fig. 2f). Instead, they appear to be targeted by CHROMOMETHYLASE 2 (CMT2), an enzyme that maintains methylation at a subset of CHG and CHH sites, independently of RdDM. Support for this hypothesis comes from the fact that these CHH sites are enriched for trinucleotide context CWA (W = A, T) (Fig. 2g), which is a preferred substrate for CMT2 binding [41]. Moreover, a re-analysis of a cmt2 methylation mutant from Stroud et al. [2] revealed a marked reduction in cytosine methylation at these CHH sites relative to wt (Fig. 2h), providing additional evidence for a maintenance role of CMT2 at these loci. Taken together, these results provide a possible molecular explanation for the accumulation of CHH epimutations over generation time, at least for specific CHH subcontexts. However, the ability to consistently detect these accumulation patterns from multi-generational pedigree data should be explored more systematically in future studies, particularly as a function of sample size, divergence time, and measurement uncertainty in 5mC divergence. The rate and spectrum of spontaneous CG, CHG, and CHH epimutations We examined the estimated epimutation rates corresponding to the best fitting models from above (Fig. 3a, Additional files 2, 3, and 4: Tables S2-S4). Globally, we found that the CG methylation gain rate (α) is 1.4·10^−4 per CG per haploid genome per generation on average (range 8.6·10^−5 to 1.94·10^−4) and the loss rate (β) is 5.7·10^−4 on average (range 2.5·10^−4 to 8.3·10^−4). Using data from pedigree MA1_1, we also obtained the first epimutation rate estimates for contexts CHG and CHH. The gain and loss rates for CHG were 3.5·10^−6 and 5.8·10^−5 per CHG per haploid genome per generation, respectively; and for CHH, they were 1.9·10^−6 and 1.6·10^−4 per CHH per haploid genome per generation. Hence, transgenerationally heritable CHG and CHH epimutations arise at rates that are about 1 to 2 orders of magnitude lower than CG epimutations in A. thaliana, which is reflected in the relatively slow increase of 5mC divergence in non-CG contexts over generation time (Fig. 2d). Comparisons of the CG epimutation rates and spectra.
a Shown are the estimates (± 95% confidence intervals) of the genome-wide (global) CG epimutation rates for the different pedigrees. For comparison, we also show the range of previous estimates from A. thaliana MA lines (A. thaliana (2015)); see van der Graaf et al. (2015). In poplar, the estimated per-year epimutation rates were converted to per-generation rates by assuming generation times of 15 years and 150 years. The gain and loss rates are all well within one order of magnitude of each other, and differences are mostly within estimation error. The dashed vertical lines mark off the lower and upper range of the point estimates. b Side-by-side comparison of "germline" and somatic epimutation rate estimates (± 95% confidence intervals) in A. thaliana MA lines and poplar, respectively, for selected genomic features. The rank ordering of the magnitude of these rates is similar. For A. thaliana, the order of presentation of the pedigrees is MA3, MA1_3, and MA1_1 (from bottom to top within each feature). Feature-specific rates could not be obtained in dandelion since no annotated assembly is currently available In addition to global estimates, we also assessed the gain and loss rates for selected genomic features (exons, promoters, TEs). In line with previous analyses [12], we found striking and consistent rate differences, with exon-specific epimutation rates being 2 to 3 orders of magnitude higher than TE-specific rates (Fig. 3b, Additional files 2, 3, and 4: Tables S2-S4). Interestingly, this trend was not only restricted to CG sites, but was also present in contexts CHG and CHH. This latter finding points to yet unknown sequence or chromatin determinants that affect the 5mC fidelity of specific regions across cell divisions, independently of CG, CHG, and CHH methylation pathways. We note that the CG epimutation rates reported here differ slightly from our previous estimates [12] (Fig. 3a, Additional files 3 and 4: Tables S3-S4). This small discrepancy is mainly the result of differences in the data pre-processing. Application of AlphaBeta to published pre-processed samples yielded similar results to those reported previously (data not shown), indicating that the statistical inference itself is consistent. Unlike past approaches, we here utilized the recent MethylStar pipeline [42] for data pre-processing and methylation state calling. The use of this pipeline leads to a substantial increase in the number of high-confidence cytosine methylation calls for downstream epimutation analysis (Additional file 5: Table S5). This boost in sample size is reflected in the lower variation in α and β estimates across MA pedigrees compared with previous reports [12] (Fig. 3a, Additional files 2 and 3: Tables S2-S3). Analysis of spontaneous somatic epimutations in poplar Despite the above quantitative insights into the rate and spectrum of spontaneous epimutation in A. thaliana, it remains unclear how and where these epimutations actually originate in the plant life cycle. One hypothesis is that they are the result of imperfect 5mC maintenance during the mitotic replication of meristematic cells, which give rise to all above- and below-ground tissues, including the "germline" (Additional file 6: Figure S1). As the germline is believed to be derived quite late in development from somatic precursors, somatic epimutations that accumulate during aging can subsequently be passed to offspring.
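Before turning to an alternative hypothesis below, it may help to see how per-generation gain and loss rates of the magnitude reported above translate into 5mC divergence. The following Python sketch iterates a deliberately simplified two-state (unmethylated/methylated) per-site model for two lineages descending from a common founder; the starting methylation level and site count are hypothetical placeholders, and this is only an illustration of the accumulation process, not AlphaBeta's actual fitting machinery.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_divergence(alpha, beta, gens_since_ancestor, n_sites=200_000, p_m0=0.25):
    """Monte Carlo estimate of the expected 5mC divergence D between two lineages
    that split from a common ancestor 'gens_since_ancestor' generations ago.
    alpha: per-site, per-generation gain rate (u -> m); beta: loss rate (m -> u)."""
    ancestor = rng.random(n_sites) < p_m0           # True = methylated in the founder
    lineages = []
    for _ in range(2):                               # two independent lineages
        state = ancestor.copy()
        for _ in range(gens_since_ancestor):
            gains  = (~state) & (rng.random(n_sites) < alpha)
            losses = state & (rng.random(n_sites) < beta)
            state = (state | gains) & ~losses
        lineages.append(state)
    return np.mean(lineages[0] != lineages[1])       # fraction of sites that differ

# Rates of the same magnitude as the global CG averages quoted above
alpha, beta = 1.4e-4, 5.7e-4
for g in (5, 15, 30):
    print(g, simulate_divergence(alpha, beta, g))
```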
An alternative hypothesis is that heritable epimutations originate as a byproduct of sRNA-mediated reinforcement errors in the sexual cell lineages. One way to distinguish these two possibilities is to study epimutational processes in systems that bypass or exclude sexual reproduction. Long-lived perennials, such as trees, represent a powerful system to explore this. A tree's branching structure can be interpreted as an intra-organismal phylogeny of different somatic cell lineages. It is therefore possible to track mutations and epimutations and their patterns of inheritance across different tree sectors. Recently, there has been a surge of interest in characterizing somatic nucleotide mutations in trees using whole genome sequencing data [43–46]. These studies have shown that fixed mutations arise sequentially in different tree sectors, thus pointing to a shared meristematic origin. To facilitate the first insights into epimutational processes in long-lived perennials, we applied AlphaBeta to MethylC-seq leaf samples (∼ 41.1X coverage) from 8 separate branches of a single poplar (Populus trichocarpa) tree (see also co-submission, [26]). The tree features two main stems (here referred to as tree 13 and tree 14), which were originally thought to be two separate trees (Fig. 4a, b). However, both stems are stump sprouts off an older tree that was knocked down about 350 years ago. In other words, tree 13 and tree 14 are clones that have independently diverged for a long time. Four branches from each tree were chosen and aged by coring at the points where each branch meets the main stem as well as at the terminal branch (Fig. 4a, b, see the "Materials and data pre-processing" section). Age dating of the bottom sector of the tree proved particularly challenging because of heart rot, rendering estimates of the total tree age imprecise. However, an estimate based on diameter measurements places the minimum age of the tree at about 250 years. Analysis of somatic epimutations in poplar. a A single poplar (P. trichocarpa) tree was analyzed. Tree 13 and 14 are two main stems that have diverged early in development. Four branches from each tree were chosen and aged by coring. b Shown are the coring sites along with the coring-based branch ages. Age coring proved technically challenging at the bottom of the tree and led to unintelligible ring counts. An educated guess places the age of the tree between 250 and 350 years. c The tree can be represented as an intra-organismal phylogeny. Leaf methylomes were collected from each of the selected branches and served as input for AlphaBeta. d Overview of the data: N is the total number of sequenced samples; Seq depth is the average sequence depths of the samples; # TP is the number of unique time-points that are sampled; max. (Δt) is the maximum divergence time (in years) between leaf samples. e AlphaBeta was fitted to the global CG methylation divergence data of the complete tree, treating tree age as an unknown parameter. The model residual (LSQ) was minimized at an age of 330 years, which is our estimate of the age of the tree. f Model comparisons indicate that somatic epimutations accumulate neutrally in context CG (red) and CHG (orange) during aging, both at the global scale as well as within specific genomic features (exons, promoters, TEs).
g, h Shown are the fits of model ABneutral to the global CG (red) and CHG (orange) methylation divergence data of the complete tree (intra-tree + inter-tree, g), as well as for tree 13 and tree 14 separately (intra-tree, h) Inferring total tree age from leaf methylome data We used the coring-based age measurements from each of the branches along with the branch points to calculate divergence times (Δt) between all pairs of leaf samples (Fig. 4c). We did this by tracing back their ages (in years) along the branches to their most recent common branch point (i.e., "founder cells") (Additional file 6: Figure S1). The calculation of the divergence times for pairs of leaf samples originating from tree 13 and tree 14 was not possible since the total age of the tree was unknown. To solve this problem, we included the total age of the tree as an additional unknown parameter in our epimutation models. Our model estimates revealed that the total age of the tree is approximately 330 years (Fig. 4e), an estimate that fits remarkably well with the hypothesized age window (between 250 and 350 years). Furthermore, the model fits provided overwhelming evidence that somatic epimutations, in poplar, accumulate in a selectively neutral fashion during aging, both at the genome-wide scale (globally) as well as at the sub-genomic scale (exons, promoters, TEs) (Fig. 4f, see also co-submission [26]). This was true for CG and CHG contexts (Fig. 4g). The fact that the accumulation of CHG epimutations is so clearly detectable in poplar, but only inconsistently in A. thaliana MA lines, could indicate that somatically acquired CHG methylation changes experience some level of reprogramming during sexual reproduction. But this hypothesis should be tested more directly using cell-type-specific sequencing approaches. To rule out the possibility that the somatic accumulation patterns in poplar are dominated by our estimate of tree age, we also examined the accumulation patterns within tree 13 and tree 14 separately. We found similar accumulation slopes as well as epimutation rates (Fig. 4h, see also co-submission [26]). Epimutation spectra have a somatic origin We examined the somatic epimutation rate estimates from the complete tree analysis. At the genome-wide scale, we found that the 5mC gain and loss rates in context CG are 1.7·10^−6 and 5.8·10^−6 per site per haploid genome per year, respectively, and 3.3·10^−7 and 4.1·10^−6 in context CHG. Interestingly, these per-year CG epimutation rates are only about two orders of magnitude lower than the per-generation rates in A. thaliana MA lines. Assuming an average generation time of about 15 to 150 years in poplar [47], its expected per-generation CG epimutation rate would be between ∼ 10^−5 and ∼ 10^−4, which is within the same order of magnitude as that of A. thaliana (∼ 10^−4) (Fig. 3a). This close similarity is remarkable given that poplar is about ∼ 100 times larger and its life cycle ∼ 1000 times longer than that of A. thaliana. Similar insights were reached in a recent comparison of the per-generation nucleotide mutation rates between oak (Quercus robur) and A. thaliana [45], which were also found to be remarkably close to each other. Taken together, these findings support the emerging hypothesis that meristematic cells of long-lived perennials undergo fewer cell divisions per unit time than annuals, so that the cumulative life-time number of cell divisions is similar [46]. This hypothesis should be tested more directly using cell count assays.
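As a concrete illustration of two calculations described above (divergence times obtained by tracing leaf samples back to their most recent common branch point, and the conversion of per-year somatic rates into approximate per-generation rates), here is a minimal Python sketch. All ages and generation times are hypothetical placeholders in the spirit of the numbers quoted above.

```python
def divergence_time(years_a_to_branch_point, years_b_to_branch_point):
    """Divergence time (in years) between two leaf samples: the growth time separating
    them through their most recent common branch point (summed over both branch paths)."""
    return years_a_to_branch_point + years_b_to_branch_point

# Hypothetical example: sample A sits 80 years of growth above the shared branch point,
# and sample B sits 60 years above it.
print(divergence_time(80, 60))  # -> 140 years

def per_generation_rate(per_year_rate, generation_time_years):
    """Convert a per-year somatic epimutation rate into an approximate per-generation rate."""
    return per_year_rate * generation_time_years

cg_gain_per_year = 1.7e-6  # per site per haploid genome per year, as reported above
for generation_time in (15, 150):  # assumed poplar generation times (years)
    print(generation_time, per_generation_rate(cg_gain_per_year, generation_time))
```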
To assess whether the accumulation dynamics of somatic epimutations in poplar differs between genomic features, we examined in more detail the estimated rates and spectra for exons, promoters, and TEs (Fig. 3b). Focusing on context CG, we found considerable rate differences. The gain rates for exons, promoters, and TEs were 2.4·10^−6, 1.1·10^−6, and 7.5·10^−7 per site per haploid genome per year, respectively, and the loss rates were 2·10^−5, 8·10^−6, and 2.8·10^−7. Intriguingly, the rank order of these rates was similar to what we had observed for germline epimutations in A. thaliana, with exons showing the highest combined rates, followed by promoters and then TEs (Fig. 3b). These findings indicate that the epimutation spectrum is deeply conserved across angiosperms and that it is mainly shaped during somatic development, rather than being a byproduct of selective reinforcement of DNA methylation in the germline or early zygote. Identifying cis- and trans-determinants that affect local epimutation rates seems to be an important next challenge [11]. Analysis of spontaneous epimutations in asexually derived dandelion MA lines Our analysis of A. thaliana and poplar revealed strong similarities in epimutation rates and spectra. To facilitate further inter-specific comparisons, particularly across different mating systems, we generated novel MA lines in an asexual dandelion (Taraxacum officinale) genotype (AS34) [48] (Fig. 5a). Apomictic dandelions are triploid and reproduce asexually via clonal seeds in a process that involves unreduced egg cell formation (diplospory), parthenogenic embryo development, and autonomous endosperm formation, resulting in genetically identical offspring [49]. Using single-seed descent from a single apomictic triploid founder genotype, 8 replicated lineages were propagated for 6 generations, and 5mC measurements were obtained from each generation (Fig. 5a). Analysis of CG epimutations in apomictic dandelion. a Using single-seed descent from a single apomictic triploid founder genotype, 8 replicated lineages were propagated for 6 generations. DNA methylation measurements were obtained using epigenotyping by sequencing (epiGBS). b Overview of the data: N is the total number of sequenced samples; Seq depth is the average sequence depths of the samples; # TP is the number of unique time-points that are sampled; max. Δt is the maximum divergence time (in generations) between samples. *Note: the calculation of average read coverage was based only on interrogated cytosines as epiGBS does not yield any genome-wide data. c Model fits to the CG divergence data. Highly significant increases in 5mC divergence (D) over generation time (Δt) were detected in all sequence contexts, despite the relatively large variation in 5mC divergence patterns (see text) The total dataset was relatively large, with 48 sequenced samples and a maximum divergence time of 14 generations (Fig. 5b). 5mC measurements were obtained using epigenotyping-by-sequencing (epiGBS) [32] (see the "Materials and data pre-processing" section). Since there is currently no published dandelion reference assembly, local assemblies were generated de novo from the epiGBS short reads and served as the basis for cytosine methylation calling [32]. With this approach, ∼ 24,000 measured cytosines were shared between any two sample pairs on average and were used to calculate pair-wise CG methylation divergence D. Plotting D against divergence time (Δt) revealed considerable measurement variation across samples (Fig. 5c).
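For readers who want to see what such a pairwise divergence calculation looks like in code, the sketch below computes D as the fraction of commonly covered cytosines whose methylation status calls differ between two samples. The status vectors are toy inputs, and the handling of intermediate (epiheterozygous) calls in AlphaBeta is more involved than this simple mismatch fraction.

```python
import numpy as np

def methylation_divergence(status_a, status_b):
    """Pairwise 5mC divergence D: the fraction of cytosines, covered in both samples,
    whose methylation status calls disagree. Encoding: 1 = methylated, 0 = unmethylated,
    np.nan = not covered/filtered in that sample."""
    a = np.asarray(status_a, dtype=float)
    b = np.asarray(status_b, dtype=float)
    shared = ~np.isnan(a) & ~np.isnan(b)
    return np.mean(a[shared] != b[shared])

# Toy example with hypothetical status calls for six cytosine positions
sample_1 = [1, 0, 1, np.nan, 0, 1]
sample_2 = [1, 1, 1, 0,      0, np.nan]
print(methylation_divergence(sample_1, sample_2))  # 1 mismatch out of 4 shared sites -> 0.25
```

Applied to the epiGBS status calls, this kind of pairwise computation underlies the divergence values plotted in Fig. 5c.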
This large variation could have several possible sources: First, methylation state calling was based on local assemblies rather than on reference-based alignments. Second, epiheterozygotes in this triploid genotype could not be effectively distinguished on the basis of the observed methylation levels, which introduces uncertainties in the calculation of D. Third, early implementations of the epiGBS protocol could not distinguish PCR duplicates, a problem that has since been solved [50]. Despite these limitations, application of AlphaBeta to the CG divergence data revealed strong statistical evidence for epimutation accumulation over time (F(941, 945) = 6.68, p < 0.0001). Consistent with A. thaliana and poplar, a neutral epimutation model (ABneutral) provided the best fit to the data. Based on these model fits, we estimate the global CG gain rate and loss rate at 6.9·10^−4 and 1.4·10^−3 per CG site per haploid genome per generation, respectively (Fig. 3). We note that these "per-haploid" rate estimates are slightly biased upward, since we applied AlphaBeta's diploid models to data from a triploid species, but this model mis-specification should have little impact on the analysis of asexually reproducing systems in which genetic segregation is absent. Keeping this caveat in mind, our results show that the dandelion per-generation CG epimutation rates are close to those obtained in A. thaliana and poplar (Fig. 3a), and at least within the same order of magnitude. This finding reinforces the notion that epimutational processes are largely conserved across angiosperms, which is probably a direct consequence of the fact that the DNA methylation maintenance machinery is itself highly conserved [5, 51]. Moreover, our findings in dandelion lend further support to the hypothesis that sexual reproduction has no major impact on the formation and inheritance of spontaneous epimutations. Future studies should test this hypothesis more directly by studying the epimutation landscape of a fixed genotype that has been propagated in parallel both sexually and asexually. Accurate estimates of the rate and spectrum of spontaneous epimutations are essential for understanding how DNA methylation diversity arises in the context of plant evolution, development, and aging. Here, we presented AlphaBeta, a computational method for obtaining such estimates from pedigree-based high-throughput DNA methylation data. Our method requires that the topology of the pedigree is known. This requirement is typically met in the experimental construction of mutation accumulation lines (MA lines) that are derived through sexual or clonal reproduction. However, we demonstrated that AlphaBeta can also be used to study somatic epimutations in long-lived perennials, such as trees, using leaf methylomes and coring data as input. In this case, our method treats the tree branching structure as an intra-organismal phylogeny of somatic lineages and uses information about the epimutational history of each branch. To demonstrate the versatility of our method, we applied AlphaBeta to very diverse plant systems, including multi-generational DNA methylation data from selfing- and asexually derived MA lines of A. thaliana and dandelion, as well as intra-generational DNA methylation data of a poplar tree. Our analysis led to several novel insights about epimutational processes in plants. One of the most striking findings was the close similarity in the epimutation landscapes between these very different systems.
Close similarities were observed in the per-generation CG epimutation rates between A. thaliana, dandelion, and poplar, both at the genome-wide and at the subgenomic scale. Any detected rate differences between these different systems were all within one order of magnitude of each other, and as such practically indistinguishable from experimental sources of variation. As a reference, epimutation rate estimates across different A. thaliana mutation accumulation experiments vary by up to 75% of an order of magnitude. Clearly, larger sample sizes are needed, along with controlled experimental comparisons, to be able to identify potential biological causes underlying subtle epimutation rate differences between species, mating systems, genotypes, or environmental treatments. Furthermore, the close similarity between sexual and asexual (or somatic) systems reported here provides indirect evidence that transgenerationally heritable epimutations originate mainly during mitotic rather than during meiotic cell divisions in plants. Our application of AlphaBeta to poplar also provided the first proof-of-principle demonstration that leaf methylome data, in combination with our statistical models, can be employed as a molecular clock to age-date trees or sectors of trees. Analytically, this is similar to inferring the branch lengths of the underlying pedigree (or phylogeny). With sufficiently large sample sizes, it should be possible to achieve this with relatively high accuracy and extend this inference to the entire tree structure. The comparatively high rates of somatic and germline epimutations are instrumental in this as they provide increased temporal resolution over classical DNA sequence approaches, which rely on rare de novo nucleotide mutations. Our methodological approach should be applicable, more generally, to any perennial or long-lived species. We are currently extending the AlphaBeta tool set to facilitate such analyses. Analytically, AlphaBeta is not restricted to the analysis of plant data. The method could also be used to study epimutational processes in tumor clones based on animal single-cell WGBS data. Such datasets are rapidly emerging [52]. In this context, AlphaBeta could be instrumental in the inference of clonal phylogenies and help calibrate them temporally. Such efforts may complement current pseudotemporal ordering (or trajectory inference) methods and lineage tracing strategies in single-cell methylation data [53, 54]. The implementation of AlphaBeta is relatively straightforward. The starting point of the method is a set of methylation state calls for each cytosine. These can be obtained from any methylation calling pipeline. In the data applications presented here, we used AlphaBeta in conjunction with MethylStar [42], which is an efficient pre-processing pipeline for the analysis of WGBS data and features an HMM-based methylation state caller [55]. Application of this pipeline leads to a substantial increase in the number of high-confidence cytosine methylation calls for epimutation rate inference compared with more conventional methods. We therefore recommend using AlphaBeta in conjunction with MethylStar. Software implementing AlphaBeta is available as a Bioconductor R package at https://bioconductor.org/packages/release/bioc/html/AlphaBeta.html. Materials and data pre-processing A. thaliana MA lines data For MA3, seeds were planted and grown in 16-h day lengths and samples were harvested from young above-ground tissue.
Tissue was flash frozen in liquid nitrogen and DNA was isolated using a Qiagen Plant DNeasy kit (Qiagen, Valencia, CA, USA) according to the manufacturer's instructions. For MA1_1 and MA1_3, a detailed description of growth conditions and plant material can be found in the original publications [12, 13]. Sequencing and data processing For MA3, MethylC-seq libraries were prepared according to the protocol described in Urich et al. [56]. Libraries were sequenced to 150 bp per read at the Georgia Genomics & Bioinformatics Core (GGBC) on a NextSeq500 platform (Illumina). Average sequencing depth was 20.8X among samples (Additional file 1: Table S1). For MA1_1 and MA1_3, FASTQ files (*.fastq) were downloaded from https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE64463. All data processing and methylation state calling were performed using the MethylStar pipeline [42]. Summary statistics for each sample can be found in Additional file 1: Table S1. All sequences have been submitted to the GEO repository under GEO accession number GSE153055. Poplar data Tree coring The tree used in this study was located at Hood River Ranger District [Horse Thief Meadows area], Mt. Hood National Forest, 0.6 mi south of Nottingham Campground off OR-35 at unmarked parking area, 500′ west of East Fork Trail nbr. 650 across river, ca. 45.355313, -121.574284. Tree cores were originally collected from the main stem and five branches in April 2015 at breast height (∼ 1.5 m) for standing tree age using a stainless-steel increment borer (5 mm in diameter and up to 28 cm in length). Cores were mounted on grooved wood trim, dried at room temperature, sanded, and stained with 1% phloroglucinol following the manufacturer's instructions (https://www.forestry-suppliers.com/Documents/1568_msds.pdf). Annual growth rings were counted to estimate age. For cores for which accurate estimates could not be made from the 2015 collection, additional collections were made in spring 2016. However, due to difficulty in collecting by climbing, many of the cores did not reach the center of the stem or branches (pith) and/or the samples displayed heart rot. Combined with the difficulty in demarcating rings in porous woods such as poplar (Populus), accurate measures of tree age or branch age were challenging. A single MethylC-seq library was created for each branch from leaf tissue. Libraries were prepared according to the protocol described in Urich et al. [56]. Libraries were sequenced to 150 bp per read at the Georgia Genomics & Bioinformatics Core (GGBC) on a NextSeq500 platform (Illumina). Average sequencing depth was 41.1X among samples. MethylC-seq reads were aligned using Methylpy v1.3.2 [57]. Alignment was to the new Stettler14 assembly of P. trichocarpa, as described in [26]. Starting from the BAM files (*.bam), the MethylStar pipeline [42] was used for further data processing and methylation state calling. All sequences have been deposited in SRA (see [26]). Dandelion MA lines data Starting from a single founder individual, eight replicate lineages of the apomictic common dandelion (Taraxacum officinale) genotype AS34 [48] were grown for six generations via single-seed descent under common greenhouse conditions. Apomictic dandelions are triploid and reproduce asexually via clonal seeds in a process that involves unreduced egg cell formation (diplospory), parthenogenic embryo development, and autonomous endosperm formation, resulting in genetically identical offspring [49].
Seeds were collected from each of the 48 plants in the six-generation experiment and stored under controlled conditions (15 °C and 30% RH). After the 6th generation, from each plant in the pedigree, a single offspring individual was grown in a fully randomized experiment under common greenhouse conditions. Leaf tissue from a standardized leaf was collected after 5 weeks, flash frozen in liquid nitrogen, and stored at −80 °C until processing. DNA was isolated using the Macherey-Nagel Nucleospin Plant II kit (cell lysis buffer PL1). DNA was digested with the PstI restriction enzyme and epiGBS sequencing libraries were prepared as described elsewhere [32]. Based on genotyping-by-sequencing [58], epiGBS is a multiplex reduced representation bisulphite sequencing (RRBS) approach with an analysis pipeline that allows for local reference construction from bisulphite reads, which makes the method applicable to species for which a reference genome is lacking [32]. PstI is a commonly used restriction enzyme for genotyping-by-sequencing; however, its activity is sensitive to CHG methylation in its CTGCAG recognition sequence. This makes the enzyme better at unbiased quantification of CG methylation than of CHG methylation [32]. After quantification of the sequencing libraries using a multiplexed Illumina MiSeq Nano run, samples were re-pooled to achieve equal representation in subsequent epiGBS library sequencing. The experimental samples were sequenced on two Illumina HiSeq 2500 lanes (125 cycles paired-end) as part of a larger epiGBS experiment which consisted of a total of 178 samples that were randomized over the two lanes. Because of inadequate germination or due to low sequencing output (library failure), four of the 48 samples were not included in the downstream analysis. All sequences have been deposited in SRA under Bioproject: PRJNA608438. The biosamples include SAMN14266774 to 778, SAMN14266797 to 802, SAMN14266821 to 826, SAMN14266845 to 850, SAMN14266869 to 872, SAMN14266874, SAMN14266893 to 894, SAMN14266896 to 897, SAMN14266916 to 921, and SAMN14266940 to 945. These 44 samples have been submitted as part of a bigger experiment of 178 samples total. DNA methylation analysis Sequencing reads were demultiplexed (based on custom barcodes) and mapped against a dandelion pseudo-reference sequence that was generated de novo from PstI-based epiGBS [32]. This pseudo-reference contains the local reference of PstI-based epiGBS fragments as inferred from the bisulphite reads. Methylation variant calling was based on SAMtools mpileup and custom Python scripts, following a similar approach as described in van Gurp et al. [32]. For downstream analysis, we included only those cytosines that were called in at least 80% of the samples. In addition, cytosine positions that did not pass the filtering criteria for all generations were removed. To obtain methylation status calls, we implemented a one-tailed binomial test as previously described [12]. Multiple testing correction was performed using the Benjamini-Yekutieli method [59], and the false discovery rate (FDR) was controlled at 0.05. All statistical tests for obtaining methylation status calls of the samples were conducted within the SciPy ecosystem. AlphaBeta [60] is an open source R package licensed under GPL-3. It is freely and openly available from the Github website (https://github.com/jlab-code/AlphaBeta) under GNU General Public License v3.0, and it is part of Bioconductor [61]. Schmitz RJ.
AlphaBeta: Computational inference of epimutation rates and spectra from high-throughput DNA methylation data in plants. GSE153055. https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE153055(2020) [62]. Van Gurp TP, Wagemaker NCAM, Verhoeven KJF. Epimutation accumulation experiment in two Taraxacum officinale apomicts. BioProject PRJNA608438. https://www.ncbi.nlm.nih.gov/sra/?term=PRJNA608438. WGBS: Whole-genome bisulfite sequencing Transposable elements MA lines: Mutation accumulation lines RRBS: Reduced representation bisulphite sequencing epiGBS: Epigenotyping by sequencing 5mC: 5-Methyl cytosine RdDM: RNA-directed DNA methylation pathway CMT2: CHROMOMETHYLASE 2 HMM: Hidden Markov model Law JA, Jacobsen SE. Establishing, maintaining and modifying DNA methylation patterns in plants and animals. Nat Rev Genet. 2010; 11(3):204–20. https://doi.org/10.1038/nrg2719. Stroud H, Greenberg MVC, Feng S, Bernatavichute YV, Jacobsen SE. Comprehensive analysis of silencing mutants reveals complex regulation of the Arabidopsis methylome. Cell. 2013; 152(1-2):352–64. https://doi.org/10.1016/j.cell.2012.10.054. Bewick AJ, Hofmeister BT, Powers RA, Mondo SJ, Grigoriev IV, James TY, Stajich JE, Schmitz RJ. Diversity of cytosine methylation across the fungal tree of life. Nat Ecol Evol. 2019; 3(3):479. https://doi.org/10.1038/s41559-019-0810-9. Feng S, Cokus SJ, Zhang X, Chen P-Y, Bostick M, Goll MG, Hetzel J, Jain J, Strauss SH, Halpern ME, Ukomadu C, Sadler KC, Pradhan S, Pellegrini M, Jacobsen SE. Proc Natl Acad Sci USA. 2010; 107(19):8689–94. https://doi.org/10.1073/pnas.1002720107. Niederhuth CE, Bewick AJ, Ji L, Alabady MS, Kim KD, Li Q, Rohr NA, Rambani A, Burke JM, Udall JA, Egesi C, Schmutz J, Grimwood J, Jackson SA, Springer NM, Schmitz RJ. Widespread natural variation of DNA methylation within angiosperms. Genome Biol. 2016; 17. https://doi.org/10.1186/s13059-016-1059-0. Takuno S, Ran J-H, Gaut BS. Evolutionary patterns of genic DNA methylation vary across land plants. Nat Plants. 2016; 2(2):15222. https://doi.org/10.1038/nplants.2015.222. CAS PubMed Article Google Scholar Zemach A, McDaniel IE, Silva P, Zilberman D. Genome-wide evolutionary analysis of eukaryotic DNA methylation. Science (New York, NY). 2010; 328(5980):916–9. https://doi.org/10.1126/science.1186366. Field AE, Robertson NA, Wang T, Havas A, Ideker T, Adams PD. DNA methylation clocks in aging: categories, causes, and consequences. Mol Cell. 2018; 71(6):882–95. https://doi.org/10.1016/j.molcel.2018.08.008. Calarco JP, Borges F, Donoghue MTA, Van Ex F, Jullien PE, Lopes T, Gardner R, Berger F, Feijó JA, Becker JD, Martienssen RA. Reprogramming of DNA methylation in pollen guides epigenetic inheritance via small RNA. Cell. 2012; 151(1):194–205. https://doi.org/10.1016/j.cell.2012.09.001. Walker J, Gao H, Zhang J, Aldridge B, Vickers M, Higgins JD, Feng X. Sexual-lineage-specific DNA methylation regulates meiosis in Arabidopsis. Nat Genetics. 2018; 50(1):130. https://doi.org/10.1038/s41588-017-0008-5. Johannes F, Schmitz RJ. Spontaneous epimutations in plants. New Phytologist. 2019; 221(3):1253–9. https://doi.org/10.1111/nph.15434. Graaf AVD, Wardenaar R, Neumann DA, Taudt A, Shaw RG, Jansen RC, Schmitz RJ, Colomé-Tatché M, Johannes F. Rate, spectrum, and evolutionary dynamics of spontaneous epimutations. Proc Natl Acad Sci. 2015; 112(21):6676–81. https://doi.org/10.1073/pnas.1424254112. PubMed Article CAS PubMed Central Google Scholar Becker C, Hagmann J, Müller J, Koenig D, Stegle O, Borgwardt K, Weigel D. 
Spontaneous epigenetic variation in the Arabidopsis thaliana methylome. Nature. 2011; 480(7376):245–9. https://doi.org/10.1038/nature10555. Schmitz RJ, Schultz MD, Lewsey MG, O'Malley RC, Urich MA, Libiger O, Schork NJ, Ecker JR. Transgenerational epigenetic instability is a source of novel methylation variants. Science (New York, NY). 2011; 334(6054):369–73. https://doi.org/10.1126/science.1212959. Ossowski S, Schneeberger K, Lucas-Lledó JI, Warthmann N, Clark RM, Shaw RG, Weigel D, Lynch M. The rate and molecular spectrum of spontaneous mutations in Arabidopsis thaliana. Science (New York, NY). 2010; 327(5961):92–4. https://doi.org/10.1126/science.1180677. Weng M-L, Becker C, Hildebrandt J, Neumann M, Rutter MT, Shaw RG, Weigel D, Fenster CB. Fine-grained analysis of spontaneous mutation spectrum and frequency in Arabidopsis thaliana. Genetics. 2019; 211(2):703–14. https://doi.org/10.1534/genetics.118.301721. Vidalis A, živković D, Wardenaar R, Roquis D, Tellier A, Johannes F. Methylome evolution in plants. Genome Biol. 2016; 17(1):264. https://doi.org/10.1186/s13059-016-1127-5. PubMed PubMed Central Article CAS Google Scholar Hofmeister BT, Lee K, Rohr NA, Hall DW, Schmitz RJ. Stable inheritance of DNA methylation allows creation of epigenotype maps and the study of epiallele inheritance patterns in the absence of genetic variation. Genome Biol. 2017; 18(1):155. https://doi.org/10.1186/s13059-017-1288-x. Hagmann J, Becker C, Müller J, Stegle O, Meyer RC, Wang G, Schneeberger K, Fitz J, Altmann T, Bergelson J, Borgwardt K, Weigel D. Century-scale methylome stability in a recently diverged Arabidopsis thaliana lineage. PLoS Genet. 2015; 11(1):1004920. https://doi.org/10.1371/journal.pgen.1004920. Schmid MW, Heichinger C, Schmid DC, Guthörl D, Gagliardini V, Bruggmann R, Aluri S, Aquino C, Schmid B, Turnbull LA, Grossniklaus U. Contribution of epigenetic variation to adaptation in Arabidopsis. Nat Commun. 2018; 9(1):1–12. https://doi.org/10.1038/s41467-018-06932-5. Taudt A, Colomé-Tatché M, Johannes F. Genetic sources of population epigenomic variation. Nat Rev Genet. 2016; 17(6):319–32. https://doi.org/10.1038/nrg.2016.45. Jiang C, Mithani A, Belfield EJ, Mott R, Hurst LD, Harberd NP. Environmentally responsive genome-wide accumulation of de novo Arabidopsis thaliana mutations and epimutations. Genome Res. 2014; 24(11):1821–9. https://doi.org/10.1101/gr.177659.114. Ganguly DR, Crisp PA, Eichten SR, Pogson BJ. The Arabidopsis DNA methylome is stable under transgenerational drought stress. Plant Phys. 2017; 175(4):1893–912. https://doi.org/10.1104/pp.17.00744. Zheng X, Chen L, Xia H, Wei H, Lou Q, Li M, Li T, Luo L. Transgenerational epimutations induced by multi-generation drought imposition mediate rice plant's adaptation to drought condition. Sci Rep. 2017; 7:39843. https://doi.org/10.1038/srep39843. Lanfear R. Do plants have a segregated germline?PLOS Biol. 2018; 16(5):2005439. https://doi.org/10.1371/journal.pbio.2005439. Hofmeister BT, et al.A genome assembly and the somatic genetic and epigenetic mutation rate in a wild long-lived perennial Populus trichocarpa. Genome Biol. 2020. https://doi.org/10.1186/s13059-020-02162-5. Horvath R, Laenen B, Takuno S, Slotte T. Single-cell expression noise and gene-body methylation in Arabidopsis thaliana. Heredity. 2019; 1. https://doi.org/10.1038/s41437-018-0181-z. Secco D, Wang C, Shou H, Schultz MD, Chiarenza S, Nussaume L, Ecker JR, Whelan J, Lister R. 
Stress induced gene expression drives transient DNA methylation changes at adjacent repetitive elements. eLife. 2015; 4. https://doi.org/10.7554/eLife.09343. Cokus SJ, Feng S, Zhang X, Chen Z, Merriman B, Haudenschild CD, Pradhan S, Nelson SF, Pellegrini M, Jacobsen SE. Shotgun bisulphite sequencing of the Arabidopsis genome reveals DNA methylation patterning. Nature. 2008; 452(7184):215–9. https://doi.org/10.1038/nature06745. Lister R, O'Malley RC, Tonti-Filippini J, Gregory BD, Berry CC, Millar AH, Ecker JR. Highly integrated single-base resolution maps of the epigenome in Arabidopsis. Cell. 2008; 133(3):523–36. https://doi.org/10.1016/j.cell.2008.03.029. Meissner A, Gnirke A, Bell GW, Ramsahoye B, Lander ES, Jaenisch R. Reduced representation bisulfite sequencing for comparative high-resolution DNA methylation analysis. Nucleic Acids Res. 2005; 33(18):5868–77. https://doi.org/10.1093/nar/gki901. van Gurp TP, Wagemaker NCAM, Wouters B, Vergeer P, Ouborg JNJ, Verhoeven KJF. epiGBS: reference-free reduced representation bisulfite sequencing. Nat Methods. 2016; 13(4):322–4. https://doi.org/10.1038/nmeth.3763. Colomé-Tatché M, Johannes F. Signatures of Dobzhansky–Muller incompatibilities in the genomes of eecombinant inbred lines. Genetics. 2016; 202(2):825–41. https://doi.org/10.1534/genetics.115.179473. Broman KW. Genotype probabilities at intermediate generations in the construction of recombinant Inbred Lines. Genetics. 2012; 190(2):403–12. https://doi.org/10.1534/genetics.111.132647. Johannes F, Colomé-Tatché M. Quantitative epigenetics through epigenomic perturbation of isogenic lines. Genetics. 2011; 188(1):215–27. https://doi.org/10.1534/genetics.111.127118. Bartlett MS, Haldane JBS. The theory of inbreeding with forced heterozygosis. J Genet. 1935; 31(3):327. https://doi.org/10.1007/BF02982404. Ronald Aylmer Fisher. The theory of inbreeding. Edinburgh: Oliver and Boyd; 1949. Kawashima T, Berger F. Epigenetic reprogramming in plant sexual reproduction. Nat Rev Genet. 2014; 15(9):613–24. https://doi.org/10.1038/nrg3685. Gehring M. Epigenetic dynamics during flowering plant reproduction: evidence for reprogramming?New Phytol. https://doi.org/10.1111/nph.15856. Teixeira FK, Heredia F, Sarazin A, Roudier F, Boccara M, Ciaudo C, Cruaud C, Poulain J, Berdasco M, Fraga MF, Voinnet O, Wincker P, Esteller M, Colot V. A role for RNAi in the selective correction of DNA methylation defects. Science. 2009; 323(5921):1600–4. https://doi.org/10.1126/science.1165313. Gouil Q, Baulcombe DC. DNA methylation signatures of the plant chromomethyltransferases. PLOS Genet. 2016; 12(12):1006526. https://doi.org/10.1371/journal.pgen.1006526. Shahryary Y, Hazarika RR, Johannes F. Methylstar: a fast and robust pre-processing pipeline for bulk or single-cell whole-genome bisulfite sequencing data. BMC Genomics. 2020; 21(1):479. Wang L, Ji Y, Hu Y, Hu H, Jia X, Jiang M, Zhang X, Zhao L, Zhang Y, Jia Y, Qin C, Yu L, Huang J, Yang S, Hurst LD, Tian D. The architecture of intra-organism mutation rate variation in plants. PLOS Biol. 2019; 17(4):3000191. https://doi.org/10.1371/journal.pbio.3000191. Hanlon VCT, Otto SP, Aitken SN. Somatic mutations substantially increase the per-generation mutation rate in the conifer Picea sitchensis. Evol Lett. https://doi.org/10.1002/evl3.121. 
Schmid-Siegert E, Sarkar N, Iseli C, Calderon S, Gouhier-Darimont C, Chrast J, Cattaneo P, Schütz F, Farinelli L, Pagni M, Schneider M, Voumard J, Jaboyedoff M, Fankhauser C, Hardtke CS, Keller L, Pannell JR, Reymond A, Robinson-Rechavi M, Xenarios I, Reymond P. Low number of fixed somatic mutations in a long-lived oak tree. Nat Plants. 2017; 3(12):926. https://doi.org/10.1038/s41477-017-0066-9. Orr AJ, Padovan A, Kainer D, Külheim C, Bromham L, Bustos-Segura C, Foley W, Haff T, Hsieh J-F, Morales-Suarez A, Cartwright RA, Lanfear R. A phylogenomic approach reveals a low somatic mutation rate in a long-lived plant. bioRxiv. 2019:727982. https://doi.org/10.1101/727982. Ingvarsson PK. Multilocus patterns of nucleotide polymorphism and the demographic history of Populus tremula. Genetics. 2008; 180(1):329–40. https://doi.org/10.1534/genetics.108.090431. Verhoeven KJF, Van Dijk PJ, Biere A. Changes in genomic methylation patterns during the formation of triploid asexual dandelion lineages. Mol Ecol. 2010; 19(2):315–24. https://doi.org/10.1111/j.1365-294X.2009.04460.x. Koltunow A. Apomixis: embryo sacs and embryos formed without meiosis or fertilization in ovules,. Plant Cell. 1993; 5(10):1425–37. Moorsel S. J. v., Schmid MW, Wagemaker NCAM, Gurp T. v., Schmid B, Vergeer P. Evidence for rapid evolution in a grassland biodiversity experiment. bioRxiv. 2018:262303. https://doi.org/10.1101/262303. Bewick AJ, Niederhuth CE, Ji L, Rohr NA, Griffin PT, Leebens-Mack J, Schmitz RJ. The evolution of CHROMOMETHYLASES and gene body DNA methylation in plants. Genome Biol. 2017; 18(1):65. https://doi.org/10.1186/s13059-017-1195-1. Gaiti F, Chaligne R, Gu H, Brand RM, Kothen-Hill S, Schulman RC, Grigorev K, Risso D, Kim K-T, Pastore A, Huang KY, Alonso A, Sheridan C, Omans ND, Biederstedt E, Clement K, Wang L, Felsenfeld JA, Bhavsar EB, Aryee MJ, Allan JN, Furman R, Gnirke A, Wu CJ, Meissner A, Landau DA. Epigenetic evolution and lineage histories of chronic lymphocytic leukaemia. Nature. 2019; 1. https://doi.org/10.1038/s41586-019-1198-z. Danese A, Richter ML, Fischer DS, Theis FJ, Colomé-Tatché M. EpiScanpy: integrated single-cell epigenomic analysis. bioRxiv. 2019:648097. https://doi.org/10.1101/648097. Saelens W, Cannoodt R, Todorov H, Saeys Y. A comparison of single-cell trajectory inference methods. Nat Biotechnol. 2019; 37(5):547–54. https://doi.org/10.1038/s41587-019-0071-9. Taudt A, Roquis D, Vidalis A, Wardenaar R, Johannes F, Colomé-Tatché M. METHimpute: imputation-guided construction of complete methylomes from WGBS data. BMC Genomics. 2018; 19(1):444. https://doi.org/10.1186/s12864-018-4641-x. Urich MA, Nery JR, Lister R, Schmitz RJ, Ecker JR. MethylC-seq library preparation for base-resolution whole-genome bisulfite sequencing. Nat Protocol. 2015; 10(3):475–83. https://doi.org/10.1038/nprot.2014.114. Schultz MD, He Y, Whitaker JW, Hariharan M, Mukamel EA, Leung D, Rajagopal N, Nery JR, Urich MA, Chen H, Lin S, Lin Y, Jung I, Schmitt AD, Selvaraj S, Ren B, Sejnowski TJ, Wang W, Ecker JR. Human body epigenome maps reveal noncanonical DNA methylation variation. Nature. 2015; 523(7559):212–6. https://doi.org/10.1038/nature14465. Elshire RJ, Glaubitz JC, Sun Q, Poland JA, Kawamoto K, Buckler ES, Mitchell SE. A robust, simple genotyping-by-sequencing (GBS) approach for high diversity species. PLOS ONE. 2011; 6(5):19379. https://doi.org/10.1371/journal.pone.0019379. Benjamini Y, Yekutieli D. The control of the false discovery rate in multiple testing under dependency. Ann Stat. 2001; 29(4):1165–88. 
https://doi.org/10.1214/aos/1013699998. Shahryary Y, Johannes F, Hazarika R. jlab-code/AlphaBeta. 2020. https://doi.org/10.5281/zenodo.3992612. Shahryary Y, Johannes F, Hazarika R. Bioconductor AlphaBeta Software Package. 2020. https://doi.org/10.18129/B9.bioc.AlphaBeta. Schmitz RJ. AlphaBeta: Computational inference of epimutation rates and spectra from high-throughput DNA methylation data in plants. GSE153055. 2020. https://urldefense.proofpoint.com/v2/url?u=https-3A__www.ncbi.nlm.nih.gov_geo_ query_acc.cgi-3Facc-3DGSE153055&d=DwIGaQ&c=vh6FgFnduejNhPPD0fl_yRaSfZy8CWbWnIf4XJhSqx8&r=Z3BY_DFGt24T_Oe13xHJ2wIDudwzO_8VrOFSUQlQ_zsz-DGcYuoJS3jWWxMQECLm&m= nMao27rggwqbBJbvu1-d0yavK1ZEszYRhgNn0-mmx8g&s=HsUT2FBGvJLvyqtcALnMlH07FzdJt3Uw2EtloId06B0&e=. We thank Kay Schneitz for discussing plant development with us, Cristina Cipriani for early tests of the optimX package, and Keith Slotkin for the sRNA data. The review history is available as Additional file 7. FJ, RJS, YS, RRH, and TM acknowledge support from the Technical University of Munich-Institute for Advanced Study funded by the German Excellent Initiative and the European Seventh Framework Programme under grant agreement no. 291763. RJS acknowledges the support from the National Science Foundation (IOS-1546867). RJS is a Pew Scholar in the Biomedical Sciences, supported by the Pew Charitable Trusts. FJ and YS were also supported by the SFB Sonderforschungsbereich924 of the Deutsche Forschungsgemeinschaft (DFG). Open Access funding enabled and organized by Projekt DEAL. Technical University of Munich, Department of Plant Sciences, Liesel-Beckmann-Str. 2, Freising, 85354, Germany Yadollah Shahryary, Aikaterini Symeonidi, Rashmi R. Hazarika, Talha Mubeen & Frank Johannes Technical University of Munich, Institute for Advanced Study, Lichtenbergstr. 2a, Garching, 85748, Germany Yadollah Shahryary, Rashmi R. Hazarika, Talha Mubeen, Robert J. Schmitz & Frank Johannes Institute of Computational Biology, Helmholtz Zentrum München, Ingolstädter Landstr. 1, Neuherberg, 85764, Germany Johanna Denkena & Maria Colomé-Tatché Institute of Bioinformatics, 120 East Green Street, Athens, 30602, USA Brigitte Hofmeister European Research Institute for the Biology of Ageing, University of Groningen, University Medical Centre Groningen, A. Deusinglaan 1, Groningen, 9713 AV, Netherlands Maria Colomé-Tatché TUM School of Life Sciences Weihenstephan, Technical University of Munich, Emil-Erlenmeyer-Forum 2, Freising, 85354, Germany Netherlands Institute of Ecology (NIOO-KNAW), Department of Terrestrial Ecology, Wageningen, Wageningen, The Netherlands Thomas van Gurp & Koen J.F. Verhoeven The Center for Bioenergy Innovation, Oak Ridge National Laboratory, Oak Ridge, USA Gerald Tuskan Department of Genetics, The University of Georgia, 120 East Green Street, Athens, 30602, USA Robert J. Schmitz Yadollah Shahryary Aikaterini Symeonidi Rashmi R. Hazarika Johanna Denkena Talha Mubeen Thomas van Gurp Koen J.F. Verhoeven Frank Johannes FJ and MCT conceptualized the method. FJ, YS, and RRH implemented and documented the method. FJ, YS, AS, RRH, JD, TM, BTH, and TvG analyzed the data. KV, GT, and RJS contributed materials. FJ wrote the paper with input from all coauthors. The authors read and approved the final manuscript. Correspondence to Robert J. Schmitz or Frank Johannes. Table S1. WGBS information for MA pedigrees MA1_1, MA1_3 and MA3. Table S2. Epimutation rate estimates and model selection results for pedigree MA1_1. Table S4. 
Epimutation rate estimates and model selection results for pedigree MA3. Table S5. Pre-processing of WGBS data using MethylStar increases the number of high-confidence cytosines that can be used for epimutation analysis compared with previous pre-processing approaches. Figure S1. Developmental origin of somatic epimutations in plants. Review history. Shahryary, Y., Symeonidi, A., Hazarika, R.R. et al. AlphaBeta: computational inference of epimutation rates and spectra from high-throughput DNA methylation data in plants. Genome Biol 21, 260 (2020). https://doi.org/10.1186/s13059-020-02161-6
A Detailed Introduction to RSA Cryptography Kristian Mcdonald | Mon 27 August 2018 Updated on Mon 27 August 2018 Estimated read time: 42 min Basic Overview of RSA Background Mathematics Bézout's Identity Elements in \(\mathrm{Z_n^\times}\) \(\mathrm{Z_n^\times}\) is a Group Properties of the Totient Function An Interim Result Fermat's Little Theorem A Version of the Chinese Remainder Theorem RSA Cryptography Message Encryption and Decryption Verifying that RSA Encryption/Decryption Works Multiplicative Homomorphicity in RSA Why Does RSA Work? Why Is RSA Constructed the Way It Is? Generalising RSA: Are More Primes Better? Whether in the form of our basic communications, our banking transactions, or even driving our cars, the transmission of digital information has a significant impact on the way we live our lives. As more information from our daily life is digitised, access to secure means of privately transmitting information becomes increasingly important. In effect, the commercialisation of our digital footprint has turned information into a kind of currency - and if information is a currency, privacy is the difference between keeping your money in the bank versus freely giving it away. The history of cryptography has involved a relatively small number of major breakthroughs, as it turns out that robust cryptographic techniques that may realistically be implemented are not easy to discover. Thus, when breakthroughs happen, they garner a lot of attention. Though RSA is now considered an older technique and is often replaced by newer methods using elliptic curves, it's fair to say that the discovery of RSA cryptography ranks among the most important breakthroughs in cryptography. Whether you're interested in how blockchains function, the general history of cryptography, or even the practical utility of seemingly esoteric number theoretic results, learning how RSA cryptography works is a useful exercise that will expand your toolkit and improve your understanding of the modern world. The purpose of this article is to provide a relatively self-complete presentation of the inner workings of RSA. It's hoped that readers interested in RSA can make their way through the article and emerge confident that they understand how RSA works. The proof that RSA cryptography is valid relies on multiple interesting mathematical results. This article aims to collect the necessary background mathematics in one place, including explicit proofs for key results, to provide readers with a self-complete account of RSA. A reader hoping to understand RSA without delving into mathematical details may find the article overly complex, though may still benefit from reading the summary content. Similarly, an expert in number theory will likely find the details in some proofs "too comprehensive". However, readers who lack expertise in these matters, but are willing to "get their feet dirty," will hopefully benefit from the explicit details provided in the article and the self-complete nature of the presentation. Readers who would prefer to minimise their exposure to technical details can find other elementary introductions to RSA elsewhere online. There are also a number of good technical articles online, though often a reader must trust the results of theorems that are not explicitly proven (or are only incompletely proven) within the article and/or search further afield to fill in details.
Perusing a number of online articles suggested that many were insufficient for a reader who sought to understand every detail regarding how RSA works. This observation formed the motivation for the present article. RSA cryptography was the first1 viable implementation of what is known as public key cryptography. Public key cryptography is both a powerful and useful form of cryptography that allows anyone to send an encrypted message to another individual. The basic idea is that an individual who wants to send a private message must first encrypt the message with the receiver's public key. An individual's public key may be openly broadcast to any interested party, allowing anyone to send an encrypted message. Importantly, knowledge of the public key does not compromise the security of encrypted messages. Upon receipt of an encrypted message, the receiver uses their private key to decrypt the message. Provided the receiver does not share their private key with anyone else, they alone possess the ability to easily decrypt messages encrypted with their public key. Attackers seeking to decrypt a sender's message must use brute force to try and "crack the code". RSA cryptography relies on a number of parameters, including the length of the keys. For appropriately chosen parameters, it is technologically infeasible to implement a successful brute force attack on an encrypted message. Consequently, an attacker is highly unlikely to access the content of an encrypted message. The mathematics that underlies RSA encryption and decryption is described in detail in subsequent sections of this article. Here, we merely note that a user's public key is specified by a pair of integers \((e,n)\), while their private key is specified by a related pair \((d,n)\). To encrypt a message, a sender first converts the message into a numerical form (call this \(M\)), and subsequently transforms the message into a cipher using the following relationship: $$\begin{eqnarray} C\ =\ M^e~(\mathrm{mod}~n). \end{eqnarray}$$ The cipher \(C\) is the encrypted form of the message and is sent to the message receiver. Upon receipt of the cipher, the receiver decrypts the message with their private key via $$\begin{eqnarray} M\ =\ C^d~(\mathrm{mod}~n). \end{eqnarray}$$ The remarkable feature here is that the sender does not send the entire manipulated message (namely \(M^e\)), but rather only sends the cipher \(C\), which is merely a remainder, obtained by dividing \(M^e\) by \(n\). Despite the evident loss of information entailed by only sending the remainder, the receiver still possesses sufficient information to reconstruct the entire message. This seemingly miraculous feature of RSA cryptography is the result of underlying mathematical constructs that are both profoundly powerful and relatively simple. Explaining this mathematics in detail is the purpose of this article. We begin by covering the background maths necessary to understand RSA. We start with some formal definitions. Note that it's not necessary to understand every definition provided here, though doing so will provide context for the mathematical tools that underlie RSA. A ring is a set \(R\) with two operations called addition and multiplication. A ring is said to be commutative if the multiplication operation is commutative, namely $$\begin{eqnarray} a\times b = b\times a, \quad \forall\; a, b\in R.
\end{eqnarray}$$ A field is a commutative ring for which every nonzero element possesses a multiplicative inverse: $$\begin{eqnarray} \forall a\in R,\ \exists\ b\in R, \ \ \mathrm{such\ that}\ \ a\times b =1. \end{eqnarray}$$ More precisely, a field is a set of numbers with four operations (addition, subtraction, multiplication and division) which satisfy a set of arithmetic rules called the field axioms. 2 A finite field is simply a field with a finite number of elements. Denote the set of integers less than \(n\) as \(Z_n=\{0,1,2,\dots,n-1\}\). Addition and multiplication operations can be defined on this set using modular arithmetic, namely $$ \begin{eqnarray} a&=& b~(\mathrm{mod}~n) \quad\Longrightarrow \quad a = m_a n +b, \quad\mathrm{for~integers}~m_a~\mathrm{and}~b<n, \end{eqnarray}$$ where \(b\) is the remainder after dividing \(a\) by \(n\). Relationships such as \(a=b~(\mathrm{mod}~n)\) are referred to as congruence relationships, and for \(a=b~(\mathrm{mod}~n)\), we say that \(a\) is congruent to \(b\), meaning they share the same remainder when divided by \(n\). Addition and multiplication for \(Z_n\) take the usual modular form. For example, in \(Z_6\) one has \(3\times 4 = 0~(\mathrm{mod}~6)\) and \(3+5=2~(\mathrm{mod}~6)\). More generally, if \(a=b~(\mathrm{mod}~n)\), one has $$ \begin{eqnarray} a+c&=& b+c~(\mathrm{mod}~n),\nonumber\\ a\times c&=& b\times c~(\mathrm{mod}~n),\nonumber\\ a^c&=& b^c~(\mathrm{mod}~n). \end{eqnarray}$$ To prove these results, we write \(a = m_an+b\), so that $$\begin{eqnarray} a+c&=& (m_a n + b) + c \ =\ b+c~(\mathrm{mod}~n), \end{eqnarray}$$ since \(m_a n =0~(\mathrm{mod}~ n)\). Similarly, multiplying \(a\) and \(c\) gives $$\begin{eqnarray} a\times c\ =\ (m_a\,c)\times n + b\times c\ =\ b\times c ~(\mathrm{mod}~n), \end{eqnarray}$$ whilst raising \(a\) to the power of \(c\) gives $$\begin{eqnarray} a^c\ =\ (m_a n +b)^c\ =\ [b^c+\mathcal{F}(n)]\ =\ b^c~(\mathrm{mod}~n), \end{eqnarray}$$ where all terms in the function \(\mathcal{F}(n)\) contain the integer \(n\), giving \(\mathcal{F}(n)=0~(\mathrm{mod}~n)\). Note that we merely expanded the brackets and lumped all terms containing \(n\) together into an arbitrary function called \(\mathcal{F}(n)\). Another useful result is the ability to cancel factors from congruence relationships when they are co-prime with the modulus. Specifically, if \(k\) and \(n\) are co-prime (namely their greatest common divisor is one, \(\mathrm{gcd}(k,n)=1\)), and $$\begin{eqnarray} k\, a \ =\ k\, b~(\mathrm{mod}~n),\label{eq:cancel_factor} \end{eqnarray}$$ then \(a\) and \(b\) are congruent, meaning \(a=b~(\mathrm{mod}~n)\). To prove this result, note that if \(\mathrm{gcd}(k,n)=1\), there always exists an integer \(x\), which is the multiplicative inverse of \(k\) modulo \(n\) (this result is proven below), $$\begin{eqnarray} \mathrm{gcd}(k,n)\ =\ 1\quad \Longrightarrow \quad \exists \,x, \ ~\mathrm{such\ that}~ \ x\times k\ =\ 1~(\mathrm{mod}~n). \end{eqnarray}$$ For completeness, note that the statement \(xk=1~(\mathrm{mod}~n)\) means we may write \(xk\) as \(xk = m_kn+1\), for integer \(m_k\). Multiplying Eq. \eqref{eq:cancel_factor} by \(x\) gives: $$\begin{eqnarray} x\,k\,a\ =\ x\,k\,b~(\mathrm{mod}~n). \end{eqnarray}$$ Consider the left hand side of this expression: $$\begin{eqnarray} x\,k\,a\ =\ (m_k n+1)\times a\ =\ a~(\mathrm{mod}~n). \end{eqnarray}$$ Similarly, one can show that \(xkb=b~(\mathrm{mod}~n)\). Putting these results together gives \(a=b~(\mathrm{mod}~n)\), as required.
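These congruence rules are easy to verify numerically. The following minimal sketch (purely illustrative values, nothing beyond Python's built-in arithmetic) checks the addition, multiplication, and exponentiation rules, and shows that cancellation of a common factor can fail when the factor shares a divisor with the modulus:

```python
from math import gcd

n = 6
a, b, c = 15, 3, 4                     # a = b (mod 6), since 15 = 2*6 + 3

assert a % n == b % n                  # a and b are congruent modulo n
assert (a + c) % n == (b + c) % n      # a + c = b + c (mod n)
assert (a * c) % n == (b * c) % n      # a * c = b * c (mod n)
assert pow(a, c, n) == pow(b, c, n)    # a^c = b^c (mod n)

# Cancellation: k*a = k*b (mod n) implies a = b (mod n) when gcd(k, n) = 1.
k_good = 5                             # gcd(5, 6) = 1
x, y = 4, 10                           # x = y (mod 6)
assert (k_good * x) % n == (k_good * y) % n and x % n == y % n

# With a factor sharing a divisor with n, the implication can fail:
k_bad = 2                              # gcd(2, 6) = 2
u, v = 1, 4                            # 2*1 = 2 and 2*4 = 8 = 2 (mod 6), yet 1 != 4 (mod 6)
assert (k_bad * u) % n == (k_bad * v) % n and u % n != v % n
```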
This demonstrates that a common factor can be cancelled from a congruence relationship if the factor is co-prime with the modulus. The set of integers modulo \(n\) is a commutative ring, which we also denote as \(Z_n\).3 For every element \(a\in Z_n\), the element \((n-a)\in Z_n\) satisfies $$\begin{eqnarray} a+(n-a) &=& 0~(\mathrm{mod}~n), \end{eqnarray}$$ and is therefore the additive inverse of \(a\). When it exists, the multiplicative inverse of an element \(a\in Z_n\) is the element \(b=a^{-1}\), which satisfies \(a\times b =1~(\mathrm{mod}~n)\). In general, however, a commutative ring \(Z_n\) may contain elements for which no multiplicative inverse exists. For example, in \(Z_8\) there is no multiplicative inverse for the element 4. Note also that the product \(4\times 4=0~(\mathrm{mod}~8)\). These two properties are related: If there exists a non-zero element \(b\in Z_n\), such that \(a\times b=0~(\mathrm{mod}~n)\), the element \(a\) does not possess a multiplicative inverse in \(Z_n\) (more formally, zero-divisors in \(Z_n\) do not possess multiplicative inverses in \(Z_n\)). For the set \(Z_n\), it is always possible to define a subset \(Z_n^\times\) such that every element \(a\in Z_n^\times\) possesses a multiplicative inverse in \(Z_n^\times\). Formally, we define \(Z_n^\times\) as $$\begin{eqnarray} Z_n^\times&\equiv & \{a\in Z_n~|~\exists\, b\in Z_n,~\mathrm{such~that}~a\times b=1~(\mathrm{mod}~n)\}. \end{eqnarray}$$ When does an element in \(Z_n\) possess a multiplicative inverse in \(Z_n\)? It turns out that if \(a\in Z_n\) is co-prime with \(n\), i.e. \(\mathrm{gcd}(a,n)=1\), then \(a\) will have a multiplicative inverse in \(Z_n\). The proof of this statement uses Bézout's identity, which we now prove. Bézout's identity asserts that, given two integers \(a\) and \(n\), with greatest common divisor \(d\), namely \(\mathrm{gcd}(a,n)=d\), one can always find integers \(m_a\) and \(m_n\) satisfying $$\begin{eqnarray} m_a a+m_n n =d. \end{eqnarray}$$ The standard proof of Bézout's identity proceeds as follows. For any two integers \(a\) and \(n\), define the following set of integers: $$\begin{eqnarray} \mathcal{S} &=& \{m_a a +m_n n ~|~ m_{a},\,m_{n}\in Z,~\mathrm{and}~m_a a+m_n n >0\}. \end{eqnarray}$$ This non-empty set is comprised solely of positive integers and therefore contains a minimum element, which can be denoted as \(d=m_{d,a} a+ m_{d,n}n\), for integers \(m_{d,a}\) and \(m_{d,n}\). One can prove that \(d\) is a divisor of \(a\) as follows. Write \(a= n_a d+r_a\), for integer \(n_a\) and remainder \(r_a\). Rearranging this expression gives: $$\begin{eqnarray} r_a&=& a-n_a d\ =\ (1-n_a m_{d,a})\times a - (n_a m_{d,n})\times n, \end{eqnarray}$$ which shows that either \(r_a\in \mathcal{S}\) or \(r_a=0\). However, by definition, the remainder \(r_a\) satisfies \(0\le r_a<d\), and, furthermore, \(d\) is the smallest element in \(\mathcal{S}\). Hence \(r_a=0\), and \(d\) is a divisor of \(a\). A similar proof demonstrates that \(d\) is also a divisor of \(n\). To show that \(d\) is the greatest common divisor of \(a\) and \(n\), assume that there exists an integer \(d'\) which is also a common divisor of \(a\) and \(n\), such that \(a=m_a' d'\) and \(n = m_n' d'\). Using the expression for \(d\), one has $$\begin{eqnarray} d&=& m_{d,a}a + m_{d,n}n \ =\ (m_{d,a} m_a' + m_{d,n} m_n')\times d', \end{eqnarray}$$ demonstrating that \(d\) is divisible by \(d'\), so that \(d\ge d'\). Hence \(\mathrm{gcd}(a,n)=d\), as anticipated, and Bézout's identity follows.
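The coefficients \(m_a\) and \(m_n\) in Bézout's identity can be computed explicitly with the extended Euclidean algorithm. A minimal sketch (the function name and test values are ours, not from any library):

```python
def extended_gcd(a, n):
    """Return (d, m_a, m_n) with m_a*a + m_n*n = d = gcd(a, n)."""
    old_r, r = a, n
    old_s, s = 1, 0
    old_t, t = 0, 1
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_s, s = s, old_s - q * s
        old_t, t = t, old_t - q * t
    return old_r, old_s, old_t          # gcd, m_a, m_n

d, m_a, m_n = extended_gcd(240, 46)
assert d == 2 and m_a * 240 + m_n * 46 == d

# When gcd(a, n) = 1, the coefficient of a is the multiplicative inverse of a modulo n.
g, x, _ = extended_gcd(7, 40)
assert g == 1 and (x * 7) % 40 == 1
```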
Next, we make use of Bézout's identity to show that \(Z_n^\times\) is comprised of elements from \(Z_n\) satisfying \(\mathrm{gcd}(a,n)=1\). Consider any element \(a\in Z_n\) that is co-prime to \(n\) (i.e. \(\mathrm{gcd}(a,n)=1\)). According to Bézout's identity, there exist integers \(x\) and \(y\) that satisfy \(ax+ny =1\). Rearranging gives \(ax = -ny +1\), which can be written as $$\begin{eqnarray} a\times x&=& 1~(\mathrm{mod}~n). \end{eqnarray}$$ Thus, any element of \(Z_n\) that is co-prime to \(n\) possesses a multiplicative inverse (modulo \(n\)) in the set \(Z_n\). However, \(Z_n^\times\) was defined as the subset of elements of \(Z_n\) that possess a multiplicative inverse in \(Z_n\). Hence, \(Z_n^\times\) is comprised of the elements \(a\in Z_n\) that satisfy \(\mathrm{gcd}(a,n)=1\). The set \(Z_n^\times\) is well behaved under multiplication and, in particular, every element \(a\in Z_n^\times\) has a modular multiplicative inverse \(a^{-1}\in Z_n^\times\). Furthermore, the elements in \(Z_n^\times\) always define a group (as discussed momentarily), where group multiplication is identified as the standard integer multiplication modulo \(n\). Note that the set \(Z_n\) does not always define a group under multiplication modulo \(n\), for arbitrary \(n\).4 In cases where \(n=p\) is a prime number, one has \(\mathrm{gcd}(a,p)=1\) for all non-zero \(a\in Z_p\), and \(Z_p^\times = Z_p/\{0\}\). That is, \(Z_p^\times\) contains all non-zero elements of \(Z_p\), as all integers less than a prime number \(p\) are co-prime with \(p\). More generally, for non-prime \(n\) one has \(Z_n^\times \ne Z_n/ \{0\}\), and the number of elements in \(Z_n^\times\) is equal to the number of integers less than \(n\) that are co-prime with \(n\). However, note that Euler's totient function, \(\phi(n)\), is defined as the number of integers less than \(n\) that are co-prime with \(n\). Consequently the number of elements in \(Z_n^\times\) (called the order of \(Z_n^\times\)) is always given by \(|Z_n^\times| = \phi(n)\). It is straightforward to show that the elements of \(Z_n^\times\) satisfy the four conditions necessary to define a group: Group Associativity: Given any three elements \(a,\,b,\,c\in Z_n^\times\), one trivially has $$ \begin{eqnarray} a\times (b\times c) ~(\mathrm{mod}~n)=(a\times b)\times c~(\mathrm{mod}~n). \end{eqnarray}$$ Group Inverse: All elements in \(a\in Z_n^\times\) satisfy \(\mathrm{gcd}(a,n)=1\), and therefore have a modular multiplicative inverse in \(Z_n^\times\), by Bézout's identity. Group Closure: For any two elements \(a,\,b\in Z_n^\times\), one has \(\mathrm{gcd}(b,n)=\mathrm{gcd}(a,n)=1\), and Bézout's identity asserts that one can write $$ \begin{eqnarray} x_aa+y_a n\ =\ 1~\qquad\mathrm{and}~\qquad x_bb+y_bn\ =\ 1, \end{eqnarray}$$ for some integers \(x_{a,b}\) and \(y_{a,b}\). Multiplying these expressions gives $$ \begin{eqnarray} ab(x_ax_b) + n(x_a y_ba + x_b y_a b +y_ay_b n)=1, \end{eqnarray}$$ which, in accordance with Bézout's identity, implies that \(\mathrm{gcd}(ab,n)=1\). This demonstrates closure under group multiplication as \(ab\in Z_n^\times\). Group Identity: The set \(Z_n^\times\) always contains the element \(1\), which satisfies $$ \begin{eqnarray} 1\times a\ =\ a\times1\ =\ a \in Z_n^\times, \end{eqnarray}$$ for any element \(a\in Z_n^\times\). Thus, \(Z_n^\times\) forms a group. More precisely, \(Z_n^\times\) is an Abelian group as group multiplication is commutative.
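As a quick numerical illustration of these statements (the function names below are ours, purely for illustration), the sketch enumerates \(Z_8^\times\), checks that it consists of exactly the elements co-prime with 8, that its order equals \(\phi(8)\), and that it is closed under multiplication modulo 8:

```python
from math import gcd

def units_mod(n):
    """Elements of Z_n that possess a multiplicative inverse modulo n."""
    return [a for a in range(1, n) if any((a * b) % n == 1 for b in range(1, n))]

def phi(n):
    """Euler's totient: count of integers in [1, n-1] co-prime with n."""
    return sum(1 for a in range(1, n) if gcd(a, n) == 1)

n = 8
units = units_mod(n)
assert units == [a for a in range(1, n) if gcd(a, n) == 1] == [1, 3, 5, 7]
assert len(units) == phi(n) == 4

# Group closure: products (mod n) of units stay inside Z_n^x.
assert all((a * b) % n in units for a in units for b in units)
```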
As noted earlier, every element \(a\in Z_n\) possesses an additive inverse (modulo \(n\)), namely \((n-a)\in Z_n\), which satisfies \(a+(n-a)=0~(\mathrm{mod}~n)\). For any element \(a\in Z_n^\times\), the additive inverse from \(Z_n\) also appears in \(Z_n^\times\), namely \((n-a)\in Z_n^\times\). This is shown as follows. For all \(a\in Z_n^\times\), one has \(\mathrm{gcd}(a, n)=1\), and it is possible to write \(x_aa+ y_a n=1\), for integers \(x_a,\,y_a\). Rearranging this expression gives $$ \begin{eqnarray} x_aa+ y_a n\ =\ (-x_a)(-a) +y_an \ =\ (-x_a)(n-a) + (x_a+y_a)n\ =\ 1. \end{eqnarray}$$ Thus, \(\mathrm{gcd}((n-a),n)=1\), and \(\forall a\in Z_n^\times\), there exists an element \((n-a)\in Z_n^\times\), such that $$ \begin{eqnarray} a+(n-a)=0~(\mathrm{mod}~n). \end{eqnarray}$$ Note, however, that \( Z_n^\times\) does not contain a zero element, so addition is not well defined within \(Z_n^\times\) itself. This raises an important point - whereas addition modulo \(n\) is always well-defined for \(Z_n\) but multiplication modulo \(n\) is not (due to the presence of zero divisors), the converse is true for \(Z_n^\times\), where multiplication is well-defined but addition is not. Euler's totient function plays a role in the discussion of RSA below. It is useful to note the following properties of the totient function: For prime \(p\), one has \(\phi(p)=p-1\). If \(a\) and \(b\) are co-prime, \(\phi(ab)=\phi(a)\phi(b)\). Thus, for prime numbers \(p\) and \(q\), one has \(\phi(pq)=\phi(p)\phi(q)=(p-1)(q-1)\). The first statement follows from the definition of a prime number, as all integers less than \(p\) are co-prime with \(p\). The last two statements are proved as follows. First, consider a number \(N_2=p^2\), for some prime \(p\). We wish to determine the value of \(\phi(N_2)\). Note that there are \(p^2-1\) numbers to be considered as candidates that may be co-prime with \(N_2\) and thus counted by \(\phi(N_2)\). Of these \(p^2-1\) numbers, all will be co-prime with \(N_2\), unless the number is divisible by \(p\). There are \(p-1\) numbers less than \(N_2\) that are divisible by \(p\). This gives $$\begin{eqnarray} \phi(p^2)&=& p^2-1 - (p-1)\ =\ p^2-p. \end{eqnarray}$$ Similar arguments hold for a number \(N_m=p^m\), for an arbitrary positive integer \(m\), giving $$\begin{eqnarray} \phi(p^m)&=& p^m-p^{m-1}. \end{eqnarray}$$ This gives the value of Euler's totient function for any number that may be written as \(N_m=p^m\), for prime number \(p\). Next, consider numbers of the form \(N_{m,n}= p^m q^n\), for prime numbers \(p\) and \(q\), and positive integers \(m\) and \(n\). There are \( p^mq^n-1\) numbers to consider as candidate co-primes to \(N_{m,n}\). Of these, we should not count the numbers that are divisible by \(p\), of which there are \(p^{(m-1)}q^n-1\). Similarly we should not count the \(p^mq^{(n-1)}-1\) numbers that are divisible by \(q\). However, removing these two groups of numbers double-counts the numbers that are divisible by \(pq\). Thus, we should add back the numbers that are divisible by \(pq\), of which there are \(p^{(m-1)}q^{(n-1)}-1\). Putting this all together gives $$\begin{eqnarray} \phi(p^mq^n)&=& p^mq^n - p^mq^{n-1}- p^{m-1}q^n + p^{m-1}q^{n-1}\nonumber\\ &=& (p^m-p^{m-1})(q^n-q^{n-1})\nonumber\\ &=&\phi(p^m)\,\phi(q^n).
\end{eqnarray}$$ This result generalises for an arbitrary number of the form $$\begin{eqnarray} N\ =\ p_1^{m_1}\times p_2^{m_2}\times \ldots \times p_n^{m_n}\ =\ \Pi_{i=1}^n \,p_i^{m_i},\label{eq:prime_decomp} \end{eqnarray}$$ for integers \(m_i\), and distinct primes \(p_i\), where \(i=1,2,\ldots,n\). The generalisation is readily proven either using the same counting methods as above, or by induction. The resulting totient function is $$\begin{eqnarray} \phi(\Pi_{i=1}^n\, p_i^{m_i})&=& \Pi_{i=1}^n \left(p_i^{m_i}-p_i^{m_i-1}\right). \end{eqnarray}$$ This gives $$\begin{eqnarray} \phi(\Pi_i\, p_i^{m_i})&=& \Pi_i \,\phi(p_i^{m_i}).\label{eq:totient_product_primes} \end{eqnarray}$$ These results are sufficient to prove the claim that \(\phi(ab)=\phi(a)\phi(b)\) for co-prime integers \(a\) and \(b\). The Fundamental Theorem of Arithmetic specifies that any integer may be written as a unique product of primes, as in Eq.~\eqref{eq:prime_decomp}. Further, any pair of co-prime integers can be written as \(a=\Pi_i\, p_i^{m_i}\), and \(b=\Pi_j\,q_j^{n_j}\), for some sets of primes \(\{p_i\}\) and \(\{q_j\}\), where the sets are disjoint, namely \(\{p_i\} \cap \{q_j\}= \emptyset\). Combining the above results gives $$\begin{eqnarray} \phi(ab)&=& \phi\left(\left[\Pi_{i}\,p_i^{m_i}\right]\left[\Pi_j\,q_j^{n_j}\right]\right)\nonumber\\ &=&\left[\Pi_{i}(p_i^{m_i}-p_i^{m_i-1})\right]\times\left[ \Pi_j(q_j^{n_j}-q_j^{n_j-1})\right]\nonumber\\ &=&\phi(a)\,\phi(b), \end{eqnarray}$$ as required. Note that the second equality follows from Eq. \eqref{eq:totient_product_primes}. In this subsection we prove the following result: If \(a\) is co-prime with a prime number \(p\), then for each non-zero \(b\in Z_p\), there exists a unique \(x_b\in Z_p\) such that \(a x_b = b~(\mathrm{mod}~p)\). A proof of this result proceeds as follows. The integers \(a\) and \(p\) satisfy \(\mathrm{gcd}(a,p)=1\), so Bézout's identity asserts that one can find integers \(x_1\) and \(m_1\) such that \(ax_1+p m_1=1\). Furthermore, \(x_1\) is unique. To prove this, assume that \(y_1\) also satisfies \(a y_1=1~(\mathrm{mod}~p)\). It follows that \(ax_1 = ay_1 = 1~(\mathrm{mod}~p)\). Hence, $$\begin{eqnarray} x_1 \ =\ (ay_1) x_1\ =\ (ax_1) y_1\ =\ y_1~(\mathrm{mod}~p). \end{eqnarray}$$ Thus, \(y_1=x_1\) is the unique modular multiplicative inverse of \(a\) in \(Z_p\). This result implies that, for each non-zero \(b\in Z_p\), there exists a unique element \(x_b\in Z_p\), such that \(a x_b = b~(\mathrm{mod}~p)\). To prove this claim, multiply the expression \(a x_1 + p m_1=1\) by \(b\) and define \(x_b = b\times x_1\) and \(m_b= b\times m_1\), to obtain $$\begin{eqnarray} a x_b +p m_b = b. \end{eqnarray}$$ Consequently \(a x_b = b~(\mathrm{mod}~p)\), as required. The uniqueness of \(x_b\) follows from the uniqueness of \(x_1\). It follows that, for each non-zero element \(b\in Z_p\), the product \(b\times a\) is unique (modulo~\(p\)). Thus, there is a one-to-one correspondence between the non-zero elements of \(Z_p\), $$\begin{eqnarray} Z_p/\{0\}\ =\ \{1,\,2,\,3,\,\ldots,\,(p-1)\}, \end{eqnarray}$$ and the set of numbers $$\begin{eqnarray} A_p\ =\ \{ a,\, 2a,\, 3a,\,\ldots,\, (p-1)\,a\}. \label{eq:set_Ap} \end{eqnarray}$$ This result is used in the proof of Fermat's Little theorem. Fermat's Little theorem states that, given any non-zero integer \(a\in Z_p\), for prime number \(p\), one has \(a^{\phi(p)}=1~(\mathrm{mod}~p)\).
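Before turning to the proof, the statement is easy to check numerically, as is the one-to-one correspondence between \(Z_p/\{0\}\) and \(A_p\) used in the proof. A minimal sketch with small, purely illustrative primes:

```python
# Fermat's Little Theorem: a^(p-1) = 1 (mod p) for every non-zero a in Z_p.
for p in (5, 7, 11, 13, 101):
    assert all(pow(a, p - 1, p) == 1 for a in range(1, p))

# The correspondence used in the proof: for a co-prime with p, the set
# {a, 2a, ..., (p-1)a} reduced modulo p is a permutation of {1, ..., p-1}.
p, a = 7, 3
assert sorted((a * b) % p for b in range(1, p)) == list(range(1, p))
```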
To prove this theorem, recall the one-to-one correspondence between the non-zero elements of \(Z_p\) and the set of numbers \(A_p\) given in Eq. \eqref{eq:set_Ap}. Taking the product of all elements in the set \(A_p\), it follows that $$\begin{eqnarray} a\times 2a\times \ldots\times (p-1)a = (p-1)!~(\mathrm{mod}~p), \end{eqnarray}$$ which can be written as $$\begin{eqnarray} a^{(p-1)} (p-1)! = (p-1)!~(\mathrm{mod}~p). \end{eqnarray}$$ The factorial factor can be cancelled from each side of this congruence expression as \(p\) and \((p-1)!\) are co-prime. Thus, one has \(a^{(p-1)} = 1~(\mathrm{mod}~p)\), or, equivalently, \(a^{\phi(p)}=1~(\mathrm{mod}~p)\), as anticipated. Given two co-prime integers \(p\) and \(q\), and an integer \(x\) that satisfies \(x=a~(\mathrm{mod}~p)\) and \(x= a~(\mathrm{mod}~q)\), one can show that \(x= a~(\mathrm{mod}~pq)\). To prove this statement, assume that \(x=b~(\mathrm{mod}~pq)\) for some \(b\). We will show that \(b=a\). By definition, one has the following relationships $$\begin{eqnarray} x \ =\ n_{pq}\, (pq) +b,~\quad x\ =\ n_p\, p +a, ~\quad\mathrm{and}\quad x\ =\ n_q\,q +a. \end{eqnarray}$$ Combining the first two expressions, one obtains $$\begin{eqnarray} b\ =\ a +(n_p - n_{pq}\,q)\,p, \end{eqnarray}$$ which shows that \(b=a~(\mathrm{mod}~p)\). One may similarly show that \(b=a~(\mathrm{mod}~q)\). Equating these expressions gives $$\begin{eqnarray} b\ =\ a + m_p\, p\ =\ a +m_q\, q, \end{eqnarray}$$ for some integers \(m_p\) and \(m_q\). Therefore the number \(X\equiv m_p\,p=m_q\, q\) is divisible by both \(p\) and \(q\). However, \(p\) and \(q\) are co-prime, so either \(X=0\) or \(X\ge pq\). The latter contradicts the assertion that \(0\le b< pq\) is the remainder obtained after dividing \(x\) by \(pq\). Hence, \(m_p=m_q=0\) and \(b=a\), giving \(x= a~(\mathrm{mod}~pq)\), as claimed. We now have all the necessary background mathematics to prove the validity of RSA cryptography - if you made it this far, well done! Our discussion of RSA is broken up into a number of parts. As mentioned earlier, RSA encryption and decryption rely on a set of public and private keys. In the first section below, we describe the generation of RSA keys. Subsequently we turn our attention to the RSA encryption and decryption procedures, and then mathematically prove that RSA cryptography is valid. Additional useful results and discussion are then presented. Firstly, it's shown that RSA cryptography is multiplicatively homomorphic. We then discuss why RSA works and why the algorithm is constructed the way it is. Finally we prove that a generalisation of RSA cryptography, involving more prime numbers (multi-prime RSA), also provides a valid cryptographic protocol. Any individual wishing to send and receive messages encoded via RSA encryption must generate a pair of keys, namely a public key and a private key. The public key may be freely shared with other individuals who may, in turn, use the key to encode messages. Once a message is encrypted with the public key, only the holder of the corresponding private key can (feasibly) decrypt the message. Thus, the public and private keys play a central role in RSA encryption. Here we describe how these keys are generated. The algorithm for generating RSA encryption keys proceeds as follows. Step 1: Randomly select two (large) prime numbers, \(p\) and \(q\). Step 2: Calculate the modulus \(n=p\times q\).
Step 3: Calculate the number of integers less than \(n\) that are co-prime with \(n\), which is equivalent to calculating the value of Euler's totient function: $$\begin{eqnarray} \phi(n) \ =\ \phi(pq)\ =\ \phi(p)\,\phi(q)\ =\ (p-1)(q-1). \end{eqnarray}$$ Step 4: Select a large integer \(e\) such that \(e\in[2,\phi(n))\) and \(e\) is co-prime with \(\phi(n)\). Step 5: Compute the modular multiplicative inverse of \(e\), namely the integer \(d\in[2,\phi(n))\), that satisfies $$\begin{eqnarray} e\times d\ =\ 1~(\mathrm{mod}~\phi(n)).\label{eq:d_defined} \end{eqnarray}$$ The integer \(d\) is unique and, furthermore, \(d\) is co-prime with \(\phi(n)\). The RSA public key is given by the pair of numbers \((e,n)\), while the pair \((d,n)\) constitutes the private key. An individual may freely share the public key with others but the private key should be kept secret. In addition, the numbers \(p\) and \(q\) should not be revealed as knowledge of these primes allows an arbitrary individual to decrypt RSA encrypted messages. In Step 5, the uniqueness of \(d\) follows from the earlier proof that the multiplicative modular inverse of a number \(a\) is unique when \(a\) is co-prime with the modulus. By construction, \(e\) and \(\phi(n)\) are co-prime, meaning the multiplicative modular inverse \(d\) is unique. To see that \(d\) and \(\phi(n)\) are co-prime, let \(s=\mathrm{gcd}(d,\phi(n))\), such that \(d= c_d s\) and \(\phi(n)=c_\phi s\) for some integers \(c_d\) and \(c_\phi\). Using Eq.~\eqref{eq:d_defined} we can write \(e\times d = c_{ed}\,\phi(n)+1\), for an integer \(c_{ed}\). Combining these expressions gives $$\begin{eqnarray} e \times (c_d s)\ =\ c_{ed}\,c_\phi\,s +1, \end{eqnarray}$$ which can be cast as $$\begin{eqnarray} s\times (e c_d- c_{ed}\,c_\phi)\ =\ 1. \end{eqnarray}$$ This last expression implies that \(s=1\) (all the numbers in brackets are integers), verifying that \(\mathrm{gcd}(d,\phi(n))=1\), such that \(d\) and \(\phi(n)\) are co-prime, as claimed. Consider two individuals, Alice and Bob. Let Bob declare that his public key is \((e,n)\). Alice decides that she wishes to send a message \(M\) to Bob. For the moment, assume that \(M< n\) is an integer (we shall discuss arbitrary messages below). Alice converts the message \(M\) to the cipher \(C\) as follows: $$\begin{eqnarray} C\ =\ M^{e}~(\mathrm{mod}~n).\label{eq:encryptionRSA} \end{eqnarray}$$ Upon receiving the encrypted message, Bob uses his private key \((d,n)\) to decrypt the cipher \(C\) and obtain Alice's message by computing $$\begin{eqnarray} M\ =\ C^{d}~(\mathrm{mod}~n). \end{eqnarray}$$ Note that any individual intending to receive RSA encrypted messages requires a set of keys - their own public key, which allows other individuals to encrypt messages, and the corresponding private key, used to decrypt messages. Thus, in the above, it would be appropriate to label Bob's keys as \((e_b, n_b)\) and \((d_b,n_b)\). To send a reply to Alice, Bob must use Alice's public key \((e_a,n_a)\) to encrypt his reply, creating a new cipher that can only be decrypted by Alice, using her private key \((d_a,n_a)\). How do we know that the RSA algorithm works? Can we be sure that Bob does indeed return Alice's message \(M\) after calculating \(C^{d}~(\mathrm{mod}~n)\)? To prove that the algorithm works, one must show that \(M=C^d~(\mathrm{mod}~n)\). Recall that the Chinese Remainder Theorem tells us that if \(x=a~(\mathrm{mod}~p)\) and \(x=a~(\mathrm{mod}~q)\), then \(x=a~(\mathrm{mod}~pq)\).
Thus, to prove that RSA works, it suffices to prove that \(M=C^d~(\mathrm{mod}~p)\) and \(M=C^d~(\mathrm{mod}~q)\), as the result \(M=C^d~(\mathrm{mod}~pq)\) automatically follows. First consider \(M=C^d~(\mathrm{mod}~q)\). Equation \eqref{eq:encryptionRSA} implies that the cipher \(C\) and the message \(M\) are related as follows $$\begin{eqnarray} M^e\ =\ m\,n+ C, \end{eqnarray}$$ where \(m\) is an integer. Using this result, one may write $$\begin{eqnarray} C^d~(\mathrm{mod}~q)&=& (M^e-mn)^d~(\mathrm{mod}~q)\nonumber\\ &=& M^{ed}~(\mathrm{mod}~q), \end{eqnarray}$$ as \(n\) is divisible by \(q\). By definition, \(d\) is the modular multiplicative inverse of \(e\), which satisfies $$\begin{eqnarray} ed=1~(\mathrm{mod}~\phi(n)), \end{eqnarray}$$ so we can always write $$\begin{eqnarray} ed\ =\ s\, (p-1)(q-1) +1, \end{eqnarray}$$ for integer \(s\). Using this expression gives $$\begin{eqnarray} M^{ed}~(\mathrm{mod}~q)&=& M^{s(p-1)(q-1)+1}~(\mathrm{mod}~q)\nonumber\\ &=& M\times \left(M^{(q-1)}\right)^{s(p-1)}~(\mathrm{mod}~q).\label{eq:rsa_proof1} \end{eqnarray}$$ Next, we apply Fermat's Little Theorem, \(a^{(q-1)}=1~(\mathrm{mod}~q)\), for \(a=M\), to obtain $$\begin{eqnarray} M^{(q-1)}\ =\ m_q q+1, \end{eqnarray}$$ for integer \(m_q\). In turn, this result is used to simplify Eq. \eqref{eq:rsa_proof1} as follows $$\begin{eqnarray} M\times\left(M^{(q-1)}\right)^{s(p-1)}~(\mathrm{mod}~q)&=& M\times\left(1+m_qq\right)^{s(p-1)}~(\mathrm{mod}~q)\nonumber\\ &=&\left[ M\times(1)^{s(p-1)} +\ldots\right]~(\mathrm{mod}~q)\nonumber\\ &=& M~(\mathrm{mod}~q).\label{eq:fermatLT_in_rsa_proof} \end{eqnarray}$$ The dots in the second line denote terms containing powers of \(m_qq\), which are always divisible by \(q\). Making use of Eq. \eqref{eq:fermatLT_in_rsa_proof} in Eq. \eqref{eq:rsa_proof1} finally gives $$ \begin{eqnarray} M^{ed}~(\mathrm{mod}~q) &=& M~(\mathrm{mod}~q), \end{eqnarray}$$ as required. The same procedure can be used to prove that \(M^{ed}~(\mathrm{mod}~p)=M~(\mathrm{mod}~p)\). Hence, via the Chinese Remainder Theorem we obtain the desired result: $$\begin{eqnarray} C^d~(\mathrm{mod}~n)\ =\ M^{ed}~(\mathrm{mod}~n)\ =\ M~(\mathrm{mod}~n), \end{eqnarray}$$ proving that the RSA encryption/decryption procedure works - when Bob decrypts the cipher \(C\) he obtains Alice's message \(M\). The remarkable feature of RSA cryptography is that Alice need only send the remainder \(C\) to Bob, and yet Bob is able to reconstruct Alice's entire message, as can be mathematically proven in just a few lines! In the above we assumed that the original message was an integer \(M<n\). However, these results readily generalise for arbitrary messages. To encode an arbitrary message string, one first converts the string to a numerical form (for example, one could crudely convert the string to a bit array, then convert the bit array to standard numeric form). If the resultant numerical message, \(M\), is larger than \(n\), one simply breaks the message up into discrete chunks \(M_i<n\), such that \(M=\sum _i M_i\). Each of the chunks can then be encrypted and sent to the message recipient, who may decrypt them. In this way, arbitrary messages may be encrypted, transmitted, and decrypted. Denote by \(E\) an encryption function that encodes a message \(M\) to generate a cipher \(C\), namely \(E(M)=C\). Similarly, let \(D\) denote the decryption function that returns the original message from the cipher, namely \(D(C)=D(E(M))=M\).
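To make the preceding discussion concrete, the sketch below runs through key generation, encryption, and decryption with deliberately tiny (and therefore insecure) primes, and then encrypts a short string by splitting it into chunks smaller than \(n\). For the chunking it uses base-\(n\) digits, which is one concrete way of forming chunks \(M_i<n\) and differs slightly from the summation described above. Python's built-in three-argument pow (and, from Python 3.8, its modular-inverse form) is assumed:

```python
from math import gcd

# --- key generation (toy, insecure parameters) ---
p, q = 61, 53                      # Step 1: two (tiny) primes
n = p * q                          # Step 2: modulus, n = 3233
phi_n = (p - 1) * (q - 1)          # Step 3: phi(n) = 3120
e = 17                             # Step 4: co-prime with phi(n)
assert gcd(e, phi_n) == 1
d = pow(e, -1, phi_n)              # Step 5: d = e^{-1} (mod phi(n)) = 2753 (Python 3.8+)
assert (e * d) % phi_n == 1

# --- encryption/decryption of a single integer M < n ---
M = 65
C = pow(M, e, n)                   # C = M^e (mod n)
assert pow(C, d, n) == M           # C^d (mod n) recovers M

# --- an arbitrary short message, split into chunks smaller than n ---
def to_chunks(m, n):
    """Base-n digits of m, least significant first; every chunk is < n."""
    chunks = []
    while m:
        m, digit = divmod(m, n)
        chunks.append(digit)
    return chunks or [0]

def from_chunks(chunks, n):
    return sum(digit * n**i for i, digit in enumerate(chunks))

message = int.from_bytes(b"hi!", "big")                    # numerical form of the string
cipher = [pow(m_i, e, n) for m_i in to_chunks(message, n)]
recovered = from_chunks([pow(c_i, d, n) for c_i in cipher], n)
assert recovered.to_bytes(3, "big") == b"hi!"
```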
An encryption protocol is said to be homomorphic if operations performed on the message \(M\) also apply to the cipher \(C\). For example, an encryption scheme is homomorphic under addition if the encryption of two messages \(M_1\) and \(M_2\) satisfies $$\begin{eqnarray} C_1+C_2\ =\ E(M_1)+E(M_2)\ =\ E(M_1+M_2)\ =\ C_{12}. \end{eqnarray}$$ Homomorphism is a powerful property as it allows individuals to perform operations on encrypted data sets without actually seeing the underlying data. Consequently one could encrypt multiple messages and send the ciphers to a receiver, who performs some operations on the ciphers to generate an output which is sent back and decrypted. The individual manipulating the data set never sees the actual data yet successfully performs operations of interest. Complete homomorphism under arbitrary mathematical operations is highly non-trivial and most realistic encryption schemes achieve partial homomorphism at best. It turns out that RSA encryption is homomorphic under multiplication, which is seen as follows. Consider two messages \(M_1\) and \(M_2\), which may be encrypted to produce two ciphers: $$\begin{eqnarray} C_i\ =\ M_i^e~(\mathrm{mod}~n)\qquad\mathrm{for}\qquad i=1,2.\label{eq:homo_ciphers} \end{eqnarray}$$ The product of these messages, \(M_{12}=M_1M_2\), may also be encrypted: $$\begin{eqnarray} C_{12}\ =\ (M_{1}M_2)^e~(\mathrm{mod}~n). \end{eqnarray}$$ To show that RSA encryption is multiplicatively homomorphic we must show that \(C_{12}=C_1C_2\). According to Eq. \eqref{eq:homo_ciphers} one has $$\begin{eqnarray} C_i\ =\ M_{i}^e-x_i n, \end{eqnarray}$$ for integers \(x_i\). Multiplying the ciphers gives $$\begin{eqnarray} C_1\times C_2&=& (M_1^e-x_1n) (M_2^e - x_2n)\nonumber\\ &=& M_1^eM_2^e +F(n), \end{eqnarray}$$ where the function \(F(n)\) is divisible by \(n\). Consequently we have $$ \begin{eqnarray} C_1C_2\ =\ M_1^e M_2^e~(\mathrm{mod}~n)\ =\ (M_1M_2)^e~(\mathrm{mod}~n)\ =\ C_{12}, \end{eqnarray}$$ proving that RSA encryption is multiplicatively homomorphic. The RSA algorithm is reliant upon a set of keys, \((e,n)\) and \((d,n)\), for each individual user. The public key \((e,n)\) may be widely disseminated, whereas \((d,n)\) should be kept secret. It may naively appear that it should be possible to calculate the private key, as \(n\) and \(e\) are public, and the sole unknown (\(d\)) is the multiplicative modular inverse of \(e\). These statements are true and, in principle, knowledge of \((e,n)\) can be used to determine the private key \(d\). However, the process is (believed to be) computationally "hard", making the implementation impractical to the point of being infeasible, provided the parameters are chosen appropriately. The difficulty of "cracking the RSA code" stems from the use of the primes \(p\) and \(q\) to calculate the modulus \(n\). Although an arbitrary individual may have access to the value of \(n\), they do not know the factors \(p\) and \(q\), and therefore cannot directly calculate \(\phi(n)\) via \(\phi(n)=(p-1)(q-1)\). Absent knowledge of \(\phi(n)\), an individual does not know the modulus with respect to which \(d\) is the multiplicative inverse of \(e\) - it's therefore difficult to calculate \(d\) as it's not clear where to start. Short of using brute force to calculate \(\phi(n)\) directly, an individual must determine the prime factors \(p\) and \(q\). However, although it is easy to multiply two numbers together and obtain their product, it is believed to be computationally hard to find the factors.
This is the secret to the success of the RSA algorithm - factoring numbers is computationally difficult, so provided very large prime numbers \(p\) and \(q\) are used to generate the modulus \(n=pq\), it is infeasible for others to extract \(p\) and \(q\) from \(n\) by brute force. The definition of "computationally hard" here is somewhat ambiguous as there is no proof demonstrating that it is in fact hard to factor large numbers - the discovery of a new algorithm could potentially render RSA ineffective. Also, the definition of "infeasible" is a function of time - as computational systems advance, our capacity to factor numbers by brute force improves. For example, it (famously) appeared unlikely to Ron Rivest that RSA-129 could be cracked in 1977 (Ron estimated it would take 40 quadrillion years!) but by 1994 a team using some 1600 computers was able to crack the 426-bit message after several months of distributed effort. By 2009, RSA-768 (768 bits) was successfully factored after a multi-year effort. Thus, as technology advances, the definition of "large primes" must increase or RSA becomes ineffective. In addition, although factoring large numbers is a hard problem using classical computation techniques, a quantum computer would be able to rapidly accelerate the factorisation of large numbers. Therefore the successful construction of a quantum computer would, in effect, break RSA. It is interesting to think about how and why RSA works the way it does. At its core, RSA encrypts a message \(M\) by raising it to the power of \(e\). Decryption works by raising the encrypted message to the power of \(d\), which is the inverse of \(e\). If you made a first effort to construct an encryption scheme utilising just this property, you might try something crude like the following: To encrypt a message, one simply raises it to \(e\), giving \(C=M^e\). Decryption proceeds by applying the inverse of \(e\), giving \(C^d = M^{ed} = M\). This seems to provide a crude encryption scheme, right? There is a problem, of course, as an individual must know \(e\) to encrypt the message, which means \(e\) should be public. However, knowledge of \(e\) automatically allows one to calculate \(d=e^{-1}\) and decrypt the message. So a scheme like this cannot provide a viable public-key cryptographic system. A sensible next step, when attempting to develop an encryption scheme, would be to incorporate modular arithmetic. Modular mathematics introduces more parameters into the protocol (such as the modulus) which seems to complicate the procedure, yet has the desirable advantage of making it more difficult to decrypt messages. So let's combine the use of powers and modular arithmetic to encrypt our message \(M\). First we choose a modulus, \(n\), then express \(M^e\) in terms of the modulus: $$\begin{eqnarray} M^e\ = \ m n+C, \end{eqnarray}$$ for integer \(m\) and remainder \(C\). This is simply the statement \(C = M^e~(\mathrm{mod}~n)\). Note that \(C\) now contains (or hides) two inputs, namely the operation of raising \(M\) to the power of \(e\), and the division by \(n\). This appears promising and perhaps we could use \(C\) as a cipher. To encrypt a message, an individual now requires both \(e\) and \(n\), so the public key is the pair \((e,n)\). If an attacker knows \(n\) and \(e\), and they intercept the cipher \(C\), they still cannot determine the message \(M\), as the single equation above contains two unknowns, \(M\) and \(m\) - this seems promising. What about decryption? Clearly we need to "undo" the power of \(e\) used during encryption.
The obvious option is to use the multiplicative inverse of \(e\), namely \(d\) such that \(d\times e =1~(\mathrm{mod}~n)\). Writing symbolically, the logic here is $$\begin{eqnarray} C^d~(\mathrm{mod}~n)\ \rightarrow\ M^{ed}~(\mathrm{mod}~n)\rightarrow\ M, \end{eqnarray}$$ where we assume \(M<n\). However, we again have a problem. If an attacker knows \(e\) and \(n\), as required to encrypt the message, they have enough information to calculate the inverse \(d\). Thus, an attacker may readily decrypt any cipher \(C\) and the scheme is useless. Now we have enough information to make the key breakthrough and arrive at RSA. We retain a public key \((e,n)\), such that encryption involves the use of both powers and modular arithmetic, but we want to choose an inverse \(d\) that is not readily accessible to attackers. The most obvious modification to the last scheme is to change the definition of the inverse \(d\) to \(d\times e=1~(\mathrm{mod}~\phi)\), where \(\phi\) is an as-yet undetermined integer. Let's look at what happens if we apply this scheme. First we encrypt our message \(M\) as \(C= M^e~(\mathrm{mod}~n)\), as before. To decrypt the message, we undo the power \(e\), $$\begin{eqnarray} C^d\ =\ [M^e-mn]^d\ =\ M^{ed}+ \mathcal{F}(n), \end{eqnarray}$$ where every term in \(\mathcal{F}(n)\) contains \(n\). We can therefore reduce modulo \(n\) and obtain $$\begin{eqnarray} C^d\ =\ M^{ed}~(\mathrm{mod}~n)\ =\ M^{s\phi +1}~(\mathrm{mod}~n), \end{eqnarray}$$ where \(s\) is an integer. We see that we are almost there - if we choose \(\phi\) such that \(M^\phi=1~(\mathrm{mod}~n)\), we have a functioning encryption scheme. At this point, one needs knowledge of number theory to progress. So, drawing on your expertise in number theory (or, more likely, doing lots of reading) you may remember that Fermat's Little Theorem looks remarkably like what we're after. Fermat's Little theorem tells us that \(M^{(p-1)}=1~(\mathrm{mod}~p)\) for prime \(p\), which appears perfect! Let's choose \(\phi=(p-1)\) and restrict \(n=p\) to be a prime number. Unfortunately this is insufficient as we encounter the same problem as before - once an individual knows \(n=p\), they automatically know \(\phi(p)=(p-1)\) and can solve for the key \(d\), breaking the scheme. So this doesn't quite work, yet it appears very close to something viable. The trick (and key insight) is to somewhat decouple \(\phi\) from \(n\), so that knowledge of \(n\) does not automatically entail knowledge of \(\phi\) (and thus knowledge of \(d\)). We need \(n\) and \(\phi\) to be related, to benefit from Fermat's Little theorem, but we also need to use prime numbers and make it difficult to determine \(\phi\) from \(n\). Combining these properties, and guided by the knowledge that it is difficult to factorise numbers, a sensible first guess is to consider \(n=pq\) to be the product of two prime numbers. Remarkably, if \(\phi\equiv \phi(n)=(p-1)(q-1)\) is taken to be Euler's totient function, RSA cryptography follows. It takes some effort to prove that this scheme works (thus the earlier proofs) but nonetheless the resulting protocol is viable. The result is a cryptographic protocol with public key \((e,n)\), and private key \((d,n)\), which satisfy \(d\times e=1~(\mathrm{mod}~\phi(n))\). This scheme has the desirable properties of encrypting by applying both a power and modular arithmetic, whilst also permitting a decryption procedure that is incredibly difficult to break for an attacker.
To top it all off, the proof that the scheme works relies upon a number of interesting number theory results, generating unanticipated practical uses for results such as Fermat's Little theorem. No doubt Fermat would be both surprised and delighted! The time required to factor \(n\) by brute force increases with increasing \(n\). Consequently the use of a larger modulus \(n\) generates a more secure implementation of RSA than the use of a smaller modulus. Regarding the use of primes \(p\) and \(q\), some obvious questions arise. Why use just two prime numbers \(p\) and \(q\)? Is it possible to generalise the scheme for more prime numbers? The answer to the latter question is a resounding yes - generalising the results proved earlier, it is simple to show that RSA cryptography can be generalised to the case where the modulus is the product of an arbitrary number of distinct prime numbers, namely $$\begin{eqnarray} N\ = \ \Pi_{i=1}^n p_i, \end{eqnarray}$$ where \(N\) denotes our generalised modulus. The procedure for generating encryption keys in the family of generalised RSA cryptography schemes is as follows. Step 1: Randomly select a set of \(n\) distinct (large) prime numbers, \(p_i\), where \(i\in\{1,2,\ldots,n\}\). Step 2: Calculate the modulus \(N=\Pi_{i=1}^n \,p_i\). Step 3: Calculate Euler's totient function: $$ \begin{eqnarray} \phi(N) \ =\ \phi(\Pi_i \,p_i)\ =\ \Pi_i\, \phi(p_i)\ =\ \Pi_i\,(p_i-1). \end{eqnarray}$$ Step 4: Select a large integer \(e\in[2,\phi(N))\) that is co-prime with \(\phi(N)\). Step 5: Compute the modular multiplicative inverse \(d\in[2,\phi(N))\), which satisfies $$ \begin{eqnarray} e\times d\ =\ 1~(\mathrm{mod}~\phi(N)).\label{eq:general_d_defined} \end{eqnarray}$$ The public key is again given by the pair \((e,N)\), and the private key is \((d,N)\). Message encryption and decryption proceeds exactly as in the standard RSA cryptography. A message \(M\) is encrypted as $$\begin{eqnarray} C\ =\ M^{e}~(\mathrm{mod}~N),\label{eq:General_encryptionRSa} \end{eqnarray}$$ and the cipher is decrypted via $$\begin{eqnarray} M\ =\ C^{d}~(\mathrm{mod}~N). \end{eqnarray}$$ Using results reported already, it is easy to prove that this scheme works. First, using the Chinese Remainder Theorem, one can show that if \(x=a~(\mathrm{mod}~p_i)\,\, \forall i\), then \(x = a~(\mathrm{mod}~\Pi_i \, p_i).\) Specifically, we have already proven the case with two primes, \(p_1\) and \(p_2\), giving \(x = a~(\mathrm{mod}~p_1p_2)\). Next consider three primes, such that \(p_3\) is co-prime to \(N_{12}=p_1p_2\). For \(x=a~(\mathrm{mod}~p_3)\) and \(x=a~(\mathrm{mod}~p_1p_2)\), the Chinese Remainder Theorem gives \(x=a~(\mathrm{mod}~p_1p_2p_3)\). This process can be repeated to show that \(x=a~(\mathrm{mod}~N)\). Thus, to prove that the generalised RSA cryptography works, it is sufficient to show that $$\begin{eqnarray} C^d \ =\ M~(\mathrm{mod}~p_i)\, \quad \forall i. \end{eqnarray}$$ The proof is straightforward: $$\begin{eqnarray} C^d&=& M^{ed}~(\mathrm{mod}~p_i)\nonumber\\ &=& M^{s\phi(N)+1}~(\mathrm{mod}~p_i)\nonumber\\ &=&M\times \left(M^{(p_i-1)}\right)^{s\Pi_{j\ne i}(p_j-1)}~(\mathrm{mod}~p_i)\nonumber\\ &=& M~(\mathrm{mod}~p_i), \end{eqnarray}$$ where \(s\) is an integer and the last line follows from Fermat's Little Theorem. By symmetry, a similar result holds for all \(p_i\), and by the Chinese Remainder Theorem the desired result of \(M=C^d~(\mathrm{mod}~N)\) follows. Hence, from the theoretical perspective the generalised RSA scheme with modulus \(N=\Pi_i p_i\) provides a viable cryptographic scheme.
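As a quick numerical sanity check of the generalised scheme, a toy three-prime example (illustrative values only; nothing here is remotely secure):

```python
from math import gcd, prod

primes = [11, 13, 17]                        # Step 1: three small, distinct primes
N = prod(primes)                             # Step 2: N = 2431
phi_N = prod(p - 1 for p in primes)          # Step 3: phi(N) = 1920

e = 7                                        # Step 4: co-prime with phi(N)
assert gcd(e, phi_N) == 1
d = pow(e, -1, phi_N)                        # Step 5: modular inverse (Python 3.8+)

M = 1234                                     # a message with M < N
C = pow(M, e, N)
assert pow(C, d, N) == M                     # decryption recovers the message
```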
I was "playing around" when I first derived these results but a quick search reveals that the authors of the original RSA paper briefly mentioned this possibility. The generalised version has received some attention and, in fact, a couple of patents were filed to register the generalised RSA. This perhaps seems a little strange, given that the inventors of RSA cryptography mention the generalised scheme in their original paper. Nonetheless the existence of the patents has caused some caution regarding the use of the generalised scheme. There are also important practical considerations as the use of more primes does not necessarily make the procedure more secure. It appears that for sufficiently large \(N\), using \(n=3\) does not compromise the security of the system and has the practical benefit of simplifying the decryption calculations, via the generalised version of RSA-CRT.5 For larger values of \(n\), it may be simpler to factor the large number \(N\), though the details depend on the particular factoring algorithm used and the rate at which different algorithms become more powerful as technology advances (which varies for different algorithms). This article provided a detailed account of RSA cryptography. The presentation was largely complete and self-contained, though there are a few instances where, e.g., special cases in derivations were not discussed (these minor details are left as exercises for the reader). RSA cryptography was a trailblazing discovery that laid the path for modern public key cryptography. It was truly non- trivial to discover that RSA encryption allows anyone to send an encrypted message to another individual such that the receiver may recover the original message in its entirety despite only receiving a truncated (i.e. remainder) cipher \(C\). If you've made it this far, congratulations - it no doubt took some effort! Hopefully you've acquired an improved understanding of how RSA works and developed an appreciation for the powerful yet simple mathematics that underlies the scheme. Understanding RSA provides a good basis for further studies of newer techniques such as elliptic curve cryptography and zero-knowledge approaches like zk-SNARKs. More generally, a detailed study of RSA provides basic insights into how modern cryptographic systems work and highlights the utility of seemingly abstract areas of mathematics, such as number theory. [1] Rivest, R., Shamir, A., & Adleman, L. (1978). A Method for Obtaining Digital Signatures and Public-Key Cryptosystems. Communications of the ACM, 21(2), 120-126. DOI: 10.1145/359340.359342. The original discovery of RSA by Cocks, following ideas of Ellis, was classified and occurred within the GCHQ. By the time that Rivest, Shamir, and Adleman made the public discovery of RSA in 1977, Diffie-Hellman cryptography was already known, though both were publicly discovered after the earlier work of Ellis and Cocks. ↩ The field axioms specify the following properties: associativity, commutativity, additive and multiplicative identities, additive and multiplicative inverses, and distributivity of multiplication over addition. ↩ Note that the set of integers less than \(n\) is topologically equivalent to the set of integers modulo \(n\). Specifically, the integer elements of the former set are in a direct one-to-one correspondence to the equivalence classes that define the elements of the latter set. ↩ Though \(Z_n\) can be given a group structure by defining the group multiplication operation as integer addition modulo \(n\). 
↩ RSA-CRT is an implementation of RSA that uses the Chinese Remainder Theorem to accelerate the decryption process. The underlying protocol is identical aside from the use of mathematical identities to optimise message decryption. ↩

Tags: RSA, number theory, Fermat's Little Theorem, multi-prime RSA, Euler's totient function
Appl. Opt. 61, pp. B43-B49 • https://doi.org/10.1364/AO.439004

Computer-generated hologram manipulation and fast production with a focus on security application

Vladimir Cviljušac,1,* Antun Lovro Brkić,2 Blaž Sviličić,1,3 and Marko Čačić4
1University of Zagreb, Faculty of Graphic Arts, Getaldićeva 2, 10000 Zagreb, Croatia
2Institute of Physics, Bijenička cesta 46, 10000 Zagreb, Croatia
3AKD d.o.o., Savska cesta 31, 10000 Zagreb, Croatia
4University North, Trg dr. Žarka Dolinara 1, 48000 Koprivnica, Croatia
*Corresponding author: [email protected]
Vladimir Cviljušac: https://orcid.org/0000-0003-0700-0751
Antun Lovro Brkić: https://orcid.org/0000-0002-0891-6246

Vladimir Cviljušac, Antun Lovro Brkić, Blaž Sviličić, and Marko Čačić, "Computer-generated hologram manipulation and fast production with a focus on security application," Appl. Opt. 61, B43-B49 (2022)

Original Manuscript: August 16, 2021 | Revised Manuscript: October 13, 2021 | Manuscript Accepted: October 18, 2021

Motivated by the successful printing of a computer-generated hologram using the computer-to-film (CtF) graphic process, we present a further refined technique with increased resolution, applicable in security. The CtF process offers low cost and fast production while preserving high resolution, and it can make every hologram unique. In this paper, we present the improvement of the printing method, with several software modifications and the implementation of security features at different levels of production. © 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

In security printing, holograms belong to a group of products known as diffractive optically variable image devices (DOVIDs). They are vital elements in protection against counterfeiting. DOVIDs are manufactured by using expensive and specialized equipment. The production of holograms using unique materials or coatings, which change the polarization of light or selectively filter/scatter the wavelengths of the reference light, can complicate the process of optical copying. However, in addition to the implementation of physical levels of protection, the application of computer-generated holograms (CGHs) in security printing allows the addition of new digital levels of protection that further complicate the process of making counterfeits [1]. When designing security features, it is necessary to ensure that counterfeiting is too expensive in terms of costs, resources, and time [2,3]. In this paper, we will focus on CGH calculation based on a point cloud model [4–6] for security printing. CGH computation is a well-researched problem [7–9]. In this paper, we focus on the implementation of security features at different levels in creating and producing a security element.
To broaden the printed CGH applications, we present a new way of calculating CGH by first dividing the hologram plane into smaller segments or holo-blocks and calculating each holo-block with a different perspective [10,11] or with a different model altogether. Since each segment is calculated individually, we control model/image selection, precise object rotation, position, and reconstruction. To achieve the most from a point cloud model, we also present several methods for creating a point model from an input image or an input 3D model. Alongside the proposed new robust computational method, we applied and tested ordinary computer-to-film (CtF) technology to create an affordable and high-resolution print of CGH, avoiding specialized [12,13] and expensive techniques [14]. Motivated by the successful production of a single perspective (far-field) hologram and its reconstruction using CtF, we extended the reach of our research [15] to involve fine calibration on multiple CtF machines using higher resolution (increased to 3600 dpi) and further refined software with new features. CGHs manufactured using the CtF process differ from a classic hologram in such a way that the transmittance is binary [16]. Yet, we can retrieve reconstruction with no significant loss in quality. However, when calculating a binary hologram from a gray-scale hologram, we can dynamically change the threshold to achieve a rasterization-type effect and embed an image or a pattern on the surface [17] of the printed hologram. Considering that the CtF process cost is the same when manufacturing different holograms or manufacturing the same one, new applications in security arise. Generating holograms for mass production and making every hologram unique allows this process to be used in security applications [e.g., saving the identification (ID) number in the hologram while the person's image is embedded with rasterization]. It is possible to combine the holo-blocks and save multiple pieces of information, since every holo-block contains unique data. Limiting a holo-block size to the width of the laser allows for the reconstruction of a single perspective hologram without overlap. 2. METHOD CGH calculation is a well-known and researched problem. However, using the CtF process for their manufacture enables every produced hologram to be different in size, shape, rasterization, and information that it holds. This opens new methods of information embedding and ways to refine the process further. This section will explain every step in the hologram calculation based on point model calculation—from the input file to the final manufactured hologram, with a further explanation of each step of the process and its possibilities. A. Model Position and Rotation The proposed method for the calculation of the CGH uses 3D point cloud models [15]. Point cloud models were used in the previous research but only the 2D projection of a 3D model was saved in the hologram. In this paper, we propose a new way of generating a 3D point model, using an image as the input (A.1), while implementing the existing input of the 3D model. In both cases, the resulting point model has ${N_{{d^\prime}}}$ points with every point defined by radius vector ${\vec r_{pm,i}} = ({x_i},{y_i},{z_i})$ for $i = 1,2,\ldots,{N_{{d^\prime}}}$. 
For a more explicit representation, we can arrange all of the points from the point cloud model into a matrix $R \in {{\mathbb R}^{3 \times {N_{{d^\prime}}}}}$, containing each point's coordinates, as (1)$$R = \left[{\begin{array}{*{20}{c}}{{x_1}}&\;\;\;{{x_2}}&\;\;\;\cdots&\;\;\;{{x_{{N_{{d^\prime}}}}}}\\{{y_1}}&\;\;\;{{y_2}}&\;\;\;\cdots&\;\;\;{{y_{{N_{{d^\prime}}}}}}\\{{z_1}}&\;\;\;{{z_2}}&\;\;\;\cdots&\;\;\;{{z_{{N_{{d^\prime}}}}}}\end{array}} \right].$$ 1. Point Model of an Input Image Let ${I^{\rm{in}}} \in {{\mathbb R}^{n \times m}}$ be the input gray-scale image and let ${N_d}$ be the desired number of points in the final 3D point model. In this model, the pixel probability $p(i,j)$ that a point will be placed on pixel $(i,j)$ depends on the pixel intensity as (2)$$p(i,j) = 1 - {I^{\rm{in}}}(i,j),$$ where ${I^{\rm{in}}}(i,j)$ represents the $(i,j)$ pixel of the input image. In this model, a dark pixel (with a value 0) represents an object in the image, and a white pixel (with a value 1) represents the background. Therefore, the sum of all pixel probabilities should equal ${N_d}$. To achieve this, we re-scale the input image ${I_{\rm{in}}}$ using tricubic interpolation [18] with scaling factor $S$ calculated as (3)$$S = \frac{{{N_d}}}{{\sum\nolimits_{i = 1,j = 1}^{n,m} {p(i,j)}}}$$ and retrieve the new re-scaled image $I_{\rm{in}}^{\rm{rs}} \in {{\mathbb R}^{n^\prime \times m^\prime}}$, where $n^\prime = \lceil n/\sqrt{S}\rceil$ and $m^\prime = \lceil m/\sqrt{S}\rceil$. Let us note that after this step, the final number of points ${N_{{d^\prime}}}$ will differ slightly from the desired number ${N_d}$. To be certain that the equation ${N_{{d^\prime}}} = {N_d}$ holds, we can choose a threshold for the image $I_{\rm{in}}^{\rm{rs}}$ such that the final black-and-white image has exactly ${N_d}$ pixels with value zero. While the semi-stochastic method of point model calculation will yield more natural-looking gray-scaled images, both ways (semi-stochastic and the fixed threshold) will perform about the same when presented with black-and-white images. Note that the fixed threshold value will have a shorter computational time, which can be necessary if the point model differs for each printed hologram. An example of a point model can be seen in Fig. 1. Fig. 1. Example of point cloud model creation from an input image with (a) an input gray-scale image, (b) semi-stochastic point model calculation visualization, and (c) the fixed threshold calculation visualization. For both examples, we used ${N_d} = 2000$. Fig. 2. Fast Fourier transform (FFT) reconstruction of holograms (a) without the noise (whole domain), (b) cropped reconstruction without the noise, and (c) reconstruction with the implemented noise. Note that the noise added to the cloud point model is exaggerated for better representation. The repeating line artifact is drastically lowered compared to the FFT reconstruction without the noise. If there is no restriction where ${N_{{d^\prime}}} = {N_d}$, but rather ${N_{{d^\prime}}} \approx {N_d}$, a semi-stochastic method can be used, and a point will be placed on a pixel depending on its pixel probability (a pixel with an intensity equal to 0.32 has a pixel probability of 0.68, which means there is a 68% chance that a point will be placed in that position). Using a fixed seed method makes the solution deterministic, repeatable, and adds another layer of security to the final product.
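A simplified sketch of the semi-stochastic sampling described above is given below. For brevity it skips the tricubic re-scaling step and simply scales the pixel probabilities so that roughly $N_d$ points are accepted; the function name and test image are ours, and NumPy is assumed:

```python
import numpy as np

def point_model_from_image(image, n_points, seed=0):
    """image: gray-scale array in [0, 1]; returns a 3 x N point matrix."""
    rng = np.random.default_rng(seed)       # fixed seed -> repeatable model
    p = 1.0 - image                          # dark pixels represent the object
    p = p * n_points / p.sum()               # aim for roughly n_points accepted pixels
    rows, cols = np.nonzero(rng.random(image.shape) < p)
    # reference point at the image centre; z = 0 for a flat (image-based) model
    x = cols - image.shape[1] / 2.0
    y = image.shape[0] / 2.0 - rows
    z = np.zeros_like(x, dtype=float)
    return np.vstack([x, y, z])              # columns are the point coordinates, as in Eq. (1)

image = np.ones((64, 64))
image[16:48, 16:48] = 0.2                    # a dark square on a white background
R_im = point_model_from_image(image, n_points=500)
print(R_im.shape)                            # (3, ~500)
```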
Note that if the CGH is used on an ID card, the seed for the semi-stochastic calculation can be linked to the ID number, date of issue, or any other connected value. This method will yield a reconstruction with fewer points on the location of the gray parts on the input image (Fig. 1), and the reconstruction will appear more natural than a black-and-white image reconstruction. We can recalculate the position of the points until the final number of points ${N_{{d^\prime}}}$ is close enough to the desired number ${N_d}$. The reference point is placed in the center of the image to appoint a radius vector to each pixel and save values into a matrix ${R_{{im}}}$. Let us note that the final row of the matrix ${R_{{im}}}$ contains only zeros since the image has only 2D values. B. Noise Introduction and Point Model Size Reconstruction of a point model that has points aligned on a grid creates artifacts on the reconstructed image (Fig. 2). This is emphasized when creating point models from images (since the pixels are formed in a grid of rows and columns) and from the 3D model since the points are placed on the edges. To compensate for this problem, we introduce semi-stochastic noise to the matrices ${R_{\rm{im}}}$. Let us define matrix $U \in {{\rm R}^{3 \times {N_{{d^\prime}}}}}$ as a uniform random matrix, with values in the range $(- 1,1)$, and the maximum physical size of the final model as ${D_{\rm{im}}}$. As we want a point model created from images, we can define a half-width of a pixel ${w_p}$ as (4)$${w_p} = \frac{{{D_{\rm{im}}}}}{{2{\rm max}(n,m)}}$$ and ${Z_d}$ as the wanted depth of the input image. Note that ${w_p}$ and ${Z_d}$ satisfy the inequalities ${w_p} \ll {D_{\rm{im}}}$ and ${Z_d} \ll {D_{\rm{im}}}$. This will reduce the noise while preserving the correct perspective when applying model rotation. Let us recall that the $n,m$ are dimensions of the input image ${I_{\rm{in}}}$. Matrix with the introduced noise $R_{\rm{im}}^1$ is determined as (5)$$R_{\rm{im}}^1 = \left[{\begin{array}{*{20}{c}}{{w_p}}&\;\;0&\;\;0\\0&\;\;{{w_p}}&\;\;0\\0&\;\;0&\;\;{{Z_d}}\end{array}} \right] \times U + \frac{{{D_{\rm{im}}}}}{{2||{\rm max}({R_{\rm{im}}}{{)||}_F}}}{R_{\rm{im}}},$$ where $|| \cdot {||_F}$ stands for Frobenius norm. Note that the matrix $R_{\rm{im}}^1$ contains the final information and can be used to calculate single perspective holograms. C. Point Model Rotation and Perspective Rotation grants further control of the model, allowing for more straightforward positioning and saving more information, increasing the hologram security capabilities. While model rotation is more useful when dealing with a 3D object because it allows us to see the object from any position, the rotation of the point cloud model gathered from images can also be helpful, as we place it perpendicularly to the point source and rotate it slightly. It is an additional way of increasing security or removing discrete transitions between multiple images. 
Let us define three rotation matrices $O_1(\alpha),O_2(\beta),O_3(\gamma)\in\mathbb{R}^{3\times 3}$ given by

(6)$${O_1}(\alpha) = \left[{\begin{array}{*{20}{c}} {\cos \alpha}&{- \sin \alpha}&0\\ {\sin \alpha}&{\cos \alpha}&0\\ 0&0&1 \end{array}} \right],$$

(7)$${O_2}(\beta) = \left[{\begin{array}{*{20}{c}} {\cos \beta}&0&{\sin \beta}\\ 0&1&0\\ {- \sin \beta}&0&{\cos \beta} \end{array}} \right],$$

(8)$${O_3}(\gamma) = \left[{\begin{array}{*{20}{c}} 1&0&0\\ 0&{\cos \gamma}&{- \sin \gamma}\\ 0&{\sin \gamma}&{\cos \gamma} \end{array}} \right],$$

where $\alpha$ controls the yaw of the model, $\beta$ controls the pitch, and $\gamma$ controls the roll. The final 3D point model ${R_{\rm{im}}}(\alpha ,\beta ,\gamma)$ is obtained as

(9)$${R_{\rm{im}}}(\alpha ,\beta ,\gamma) = {O_1}(\alpha) \times {O_2}(\beta) \times {O_3}(\gamma) \times R_{\rm{im}}^1.$$

Changing the yaw, pitch, and roll parameters will produce an object with a different perspective (Fig. 3).

Fig. 3. Examples of perspectives for the box point cloud model with different values for yaw, pitch, and roll.

D. Holo-Block Calculation and Configuration

Simple reconstruction of a projection or an image is easy to achieve. However, it lacks features compared to saving a 3D model, in which different parts of the hologram provide a different perspective. This can be achieved in classic holograms by placing the model close to the hologram plane. In general, the closer the model is to the plane, the greater the change in perspective. However, this approach still has some limitations, such as the maximum degree of rotation, narrow applicability (only to 3D models), and problems in laser light reconstruction. To utilize the hologram to the full extent, we propose a method that divides the whole hologram into holo-blocks, such that every block or segment is calculated with a different model rotation or with a different point cloud model altogether (Fig. 4).

Fig. 4. Single perspective and holo-block method examples with (a1), (b1) the whole hologram, (a2), (b2) hologram divided into holo-blocks, and (a3), (b3) FFT of each holo-block. Images labeled with (a) represent a single perspective hologram, while images labeled with (b) represent our method of division and rotation.

Fig. 5. Example of image imprinting process using the (a) gray-scale hologram with $1000 \times 1000$ resolution, (b) input image with the resolution of $2 \times 2$, and (c) generating the final binary hologram. The low resolution of the input images is used to emphasize the effect.

Let us define $H(R(\alpha ,\beta ,\gamma),{\vec p_s},{\lambda _s}) \in {\mathbb R}^{{h_1} \times {h_2}}$ as the final single perspective gray-scale hologram matrix, calculated for the model ${R_{\rm{im}}}(\alpha ,\beta ,\gamma)$ and a point source located at ${\vec p_s}$ with light wavelength ${\lambda _s}$. Since the wavelength and the point source position are fixed, we will denote the hologram as $H(R(\alpha ,\beta ,\gamma))$. Vertical and horizontal resolutions are given by ${h_1}$ and ${h_2}$, respectively. Further detail, explanation, and examples for the hologram calculation algorithm can be found in our previous research [15]. We can divide the static hologram matrix into multiple holo-blocks $H_b^{\textit{ij}} \in {\mathbb R}^{({h_1}/f) \times ({h_2}/k)}$ for $i = 1, \ldots ,f$ and $j = 1, \ldots ,k$, where $f$ and $k$ are the numbers of vertical and horizontal separate holo-blocks, respectively.
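As a concrete companion to Eqs. (6)-(9) and the holo-block partitioning just introduced, the following NumPy sketch builds the rotation matrices, rotates a noisy point model $R_{\rm im}^1$, and cuts a hologram into an $f \times k$ grid of holo-blocks. The function names and the placeholder compute_hologram routine are our own illustrative assumptions; the actual hologram calculation follows [15] and is not reproduced here, and the slicing assumes $h_1$ and $h_2$ are divisible by $f$ and $k$.

```python
import numpy as np

def o1(a):  # yaw, Eq. (6)
    return np.array([[np.cos(a), -np.sin(a), 0.0],
                     [np.sin(a),  np.cos(a), 0.0],
                     [0.0,        0.0,       1.0]])

def o2(b):  # pitch, Eq. (7)
    return np.array([[ np.cos(b), 0.0, np.sin(b)],
                     [ 0.0,       1.0, 0.0      ],
                     [-np.sin(b), 0.0, np.cos(b)]])

def o3(c):  # roll, Eq. (8)
    return np.array([[1.0, 0.0,        0.0      ],
                     [0.0, np.cos(c), -np.sin(c)],
                     [0.0, np.sin(c),  np.cos(c)]])

def rotate_model(r1, alpha, beta, gamma):
    """Eq. (9): R_im(alpha, beta, gamma) = O1(alpha) O2(beta) O3(gamma) R_im^1."""
    return o1(alpha) @ o2(beta) @ o3(gamma) @ r1

def split_into_holo_blocks(hologram, f, k):
    """Cut an h1 x h2 hologram into an f x k grid of holo-blocks H_b^{ij}."""
    h1, h2 = hologram.shape
    b1, b2 = h1 // f, h2 // k
    return [[hologram[i * b1:(i + 1) * b1, j * b2:(j + 1) * b2]
             for j in range(k)] for i in range(f)]

# Usage sketch: one holo-block per yaw step, assuming a routine
# compute_hologram(model) (not shown) that returns an (h1/f) x (h2/k) block:
#   yaws = np.linspace(0.0, 2.0 * np.pi, f * k, endpoint=False)
#   blocks = [compute_hologram(rotate_model(r1, a, 0.0, 0.0)) for a in yaws]
```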
Since each holo-block is only a fraction of the entire hologram, the resolution of the final hologram stays the same, and the processing time is affected only by the model manipulation. Note, however, that the model rotation has an insignificant impact on the calculation time.

Fig. 6. Single perspective variable-transmittance hologram (VTH) used for calibration. Holograms in the first row are calculated from the identical gray-scale hologram, differing only in the threshold used to calculate the binary hologram. The second row shows the FFT of the matching hologram, while the last row contains the pictures of the optical reconstruction.

An example of this approach is a simple 3D object, such as a cube. We can manipulate the cube between the holo-blocks so that the total yaw and pitch change is $2\pi$. Using this approach, we can view the object, in this case a cube, from any direction. Note that $2\pi$ is used to emphasize the advantages of this approach, but the total degree of rotation can be both larger and smaller than $2\pi$. However, each holo-block can also contain a different model. For example, we can separate the hologram into $z$ vertical holo-blocks and calculate them such that each includes a single digit of an ID number with length $z$. Furthermore, each of those holo-blocks can be additionally split into multiple holo-blocks, such that every digit has a yaw span of a few degrees.

E. Binarization and Image Imprinting

Let us recall that the calculated hologram $H$ is an ${h_1} \times {h_2}$ gray-scale matrix and that the CtF process can only produce binary holograms. As a final step in the calculation, we need to calculate the binary hologram ${H_b}$ from the hologram $H$. While a global constant can be used to achieve this, we can dynamically change the threshold value based on a desired image and imprint another image on the surface of the hologram (Fig. 5). Note that once the minimum and maximum values of the threshold are empirically determined, this will not compromise the reconstruction quality. A printed variable-transmittance hologram (VTH) is used to determine the minimum and maximum values. Each holo-block in the VTH differs in how much light passes through it, ranging from 95% to 5% in 5% decrements (Fig. 6). Printing the VTH allows us to determine the minimum and maximum threshold values by analyzing the optical reconstruction for each level of transmittance. To have a conclusive and repeatable procedure, we used a gradient-based sharpness function [19], given by

(10)$$F({T_p}) = ||\nabla O({T_p})|{|_2},$$

where ${T_p}$ stands for the level of transmittance with values $p = 5,10, \ldots ,95\%$, $F({T_p})$ stands for the sharpness function, and $O({T_p})$ stands for the image of the optical reconstruction with a ${T_p}$ level of transmittance. Before calculating the sharpness of each image, we normalized each input image. Images of the optical reconstruction with 80% or higher sharpness compared to the sharpest image represent usable levels of transmittance (Fig. 7). Note that the value of 80% is an arbitrary choice.

Fig. 7. Sharpness function $F({T_p})$ for different values of transmittance. Since the acceptable values have at least 80% of the sharpness of the sharpest image, in this example, holograms with transmittance between 30% and 75% are acceptable.
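The 80% rule can be applied to the photographed reconstructions with a few lines of code; the normalization, the use of a discrete image gradient for the norm in Eq. (10), and the function names below are our own illustrative assumptions.

```python
import numpy as np

def sharpness(image):
    """Gradient-based sharpness, Eq. (10): l2-norm of the image gradient."""
    gy, gx = np.gradient(image.astype(float))
    return float(np.sqrt(np.sum(gx ** 2 + gy ** 2)))

def usable_transmittance_range(reconstructions, levels, ratio=0.8):
    """Return the lowest and highest transmittance levels whose normalized
    reconstructions reach at least `ratio` of the best sharpness."""
    scores = []
    for img in reconstructions:
        img = (img - img.min()) / (img.max() - img.min() + 1e-12)  # normalize
        scores.append(sharpness(img))
    scores = np.asarray(scores)
    accepted = [t for t, s in zip(levels, scores) if s >= ratio * scores.max()]
    return min(accepted), max(accepted)

# levels = list(range(5, 100, 5))   # T_p = 5%, 10%, ..., 95%
# low, high = usable_transmittance_range(photos, levels)
```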
If we take a new input gray-scale image $I_{\rm{im}}^s \in {\mathbb R}^{{c_1} \times {c_2}}$ and the hologram $H \in {\mathbb R}^{{h_1} \times {h_2}}$, then we can define the dimension ratio ${d_r}$ as

(11)$${d_r} = \frac{{h_1}}{{c_1}} = \frac{{h_2}}{{c_2}}.$$

Note that the above equation holds only if the aspect ratio of the input image $I_{\rm{im}}^s$ matches the aspect ratio of the hologram $H$. Otherwise, the input image or the hologram can be cropped so that the proportions match. Tricubic interpolation can again be used to scale the image by $0.1\,{d_r}$. Note that the tricubic interpolation is used to fit any input image, which would not be necessary for professional applications. This is done so that every pixel of the input image $I_{\rm{im}}^s$ is used to determine the threshold of a $10 \times 10$ grid of pixels on the hologram. With this, we can achieve 100 shades of gray. Since the hologram resolution ranges from 2400 dpi to 3600 dpi, the image $I_{\rm{im}}^s$ resolution ranges from 240 dpi to 360 dpi, which is the industry standard for printed images.

3. LASER SENSITIVITY CALIBRATION DURING PRODUCTION

Since the CtF process is a new technique for high-resolution CGH printing, to confirm its repeatability and availability, we utilized and tested four different CtF machines working at different resolutions (Table 1). We used two types of films (AGFA Alliance HNm 600BD and AGFA Alliance HND 600BD). The two materials do not differ from each other in any significant way and share the same film sensitivity and minimal element size. Note that every machine and film used in this research can create a high-quality hologram. This means of production is readily available, cheap, and can produce tens or hundreds of unique holograms per minute.

Table 1. List of Used Printing Machines and Matching Resolutions

To evaluate each of the CtF machines, we printed a VTH set (Fig. 6) of binary single perspective holograms calculated for each CtF machine separately. For each machine, multiple laser intensities were tested and measured under an optical microscope to find which intensity best recreates the CGH (Fig. 8). After determining the laser intensity, we used the test set to determine the lowest and highest values of the fill percentage that still give a clear reconstruction. The lowest value is set as the lowest intensity of the image in the image imprinting step, while the highest value corresponds to the highest value in the image (Fig. 6).

Fig. 8. Images of a printed hologram taken with a microscope. Labels of the images correspond to the printing machine No., while letters represent different laser power: (a) min, (b) mid, and (c) max. The left part of the image has no preprocessing, while the right part was denoised, and the threshold was set to have binary output.

A change in the exposure influences the size of the pixel, meaning that higher settings will produce a darker hologram with lower transmittance (Fig. 8). Once the noise is removed from the images taken with the microscope, we can determine the actual transmittance ${T_m}$ for each laser setting as the ratio of the number of non-zero value pixels ${p_m}$ in the image to the total number of pixels ${p_t}$,

(12)$${T_m} = {p_m}/{p_t}.$$

The average for every exposure setting is calculated over five different positions on the hologram and three different input transmittance levels ${T_t}$, for a total of fifteen measurements.
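A minimal sketch of this measurement (Eq. (12)) is given below; the global threshold standing in for the denoising step, and the function names, are our own assumptions.

```python
import numpy as np

def measured_transmittance(micrograph, threshold=0.5):
    """Eq. (12): T_m = p_m / p_t, the share of non-zero pixels after binarization."""
    binary = micrograph > threshold
    return float(binary.sum()) / binary.size

def average_transmittance(micrographs, threshold=0.5):
    """Average T_m over several positions and input levels
    (fifteen measurements per exposure setting in the text)."""
    return float(np.mean([measured_transmittance(m, threshold) for m in micrographs]))
```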
To evaluate each exposure, we calculate the match factor $F$ as

(13)$$F = \frac{{||{T_t} - {T_m}|{|_2}}}{{||{T_t}|{|_2}}} \times 100\% .$$

The final results are given in Table 1. The results show that there is no unique solution. Rather, each machine and its accompanying raster image processor (RIP) require calibration for best results. Note that this enables us to include a kind of mechanical signature of the machine in the final product, discouraging and complicating any forgery attempt.

A. Prepress Differentiation

Utilizing the CGH as a security element is possible because many production parameters act as security features. One crucial step in CGH production is the use of the correct parameters for the printing prepress. Using an inappropriate prepress will result in a poor-quality hologram, since each machine's RIP will change the diffraction grating of the hologram. The complete prepress step is elaborated in our previous work [15]. This was important to achieve a repeatable and high-quality manufacturing process. Currently, 3600 dpi is the highest printed resolution used for printing holograms with the CtF process. It is important to note that the identical hologram can be printed at different resolutions. However, for the best results, it is recommended not to utilize interpolation but rather to change the physical hologram size to preserve every point. Changing the hologram resolution, and with that its size, will change the size of the reconstructed image, as the maximum size ${Y_{{\rm max}}}$ is determined as

(14)$${Y_{{\rm max}}}({H_{{\rm res}}};{\lambda _{{\rm in}}},L) = \frac{{{\lambda _{{\rm in}}}L}}{{\sqrt {0.0508/{H_{{\rm res}}} - {\lambda _{{\rm in}}}}}},$$

where ${H_{{\rm res}}}$ is the resolution in dpi, ${\lambda _{{\rm in}}}$ is the light's wavelength, $L$ is the distance from the hologram to the screen, and the number 0.0508 is a constant used to convert the resolution in dpi to metric units. This formula is derived from the positions of the maxima of a diffraction grating. Note that the approximate change in the reconstructed image size is given by

(15)$${Y_{{\rm max}}}(H_{{\rm res}}^\prime ;{\lambda _{{\rm in}}},L) \approx \sqrt {\frac{{H_{{\rm res}}^\prime}}{{{H_{{\rm res}}}}}} {Y_{{\rm max}}}({H_{{\rm res}}};{\lambda _{{\rm in}}},L).$$

Progressive binarization allows us to imprint an image on the surface of the hologram. However, increasing the hologram reconstruction quality was the first step in making the CtF printed hologram commercially valuable (Fig. 9).

Fig. 9. Reconstruction with different levels of computational method and prepress refinement: (a) CGH calculated at a resolution of 2540 dpi with no machine parameter compensation, prepress, or noise introduction, (b) CGH calculated at the resolution of 2540 dpi with machine compensation but no progressive binarization, and (c) CGH calculated at the resolution of 3600 dpi with machine compensation, noise introduction, and progressive binarization. Note that the difference in the reconstruction size between (c) and (b) comes from increasing the resolution.

The final product, the printed security element, combines three levels of security:

1. Computer object manipulation and holo-block composition, in order to achieve a large number of combinations,
2. Progressive binarization, used to imprint an image onto the hologram surface, and
3. Machine signature, as described in the previous section.

Combining the three levels starts with the calculation of the CGH composed of $N$ holo-blocks.
Every holo-block contains information for a different object perspective or a different object altogether. Holo-block dimensions of $1\,\,{\rm mm} \times 1\,\,{\rm mm}$ have proven to be small enough to create a seamless illusion when looking through the hologram and large enough to reconstruct only one holo-block using a laser. The proposed CGH contains $N = 1369$ holo-blocks and has a final size of $3.7\,\,{\rm cm} \times 3.7\,\,{\rm cm}$. Since the CGH has gray-scale values after calculation, we use progressive binarization to imprint an image onto the surface of the CGH. If the correct values for the lowest and the highest transmittance (Fig. 6) are used, the reconstruction quality will not degrade when compared to other means of binarization (Fig. 10).

Fig. 10. Example of progressive binarization used for image imprinting. (a1), (b1) Input CGH composed of holo-blocks, (a2), (b2) input image, and (a3), (b3) final security element. Two positions in (a3) and (b3) are enlarged for better visualization of the rasterization effect.

An image imprinted on the CGH by means of progressive binarization creates a clear image that is not connected to the point model saved in the CGH (Fig. 11).

Fig. 11. Examples for two different images and possible reconstructions based on which part of the hologram is reconstructed. Note that in this example a low number of holo-blocks is used (25 in total per hologram) for clearer visualization. In practice, the number of holo-blocks is a couple of orders of magnitude greater.

Expanding the point cloud model calculations to be able to use 2D images, and advancing beyond single perspective hologram calculations, opened a new area of security element printing while preserving the low cost and high throughput. Note that once calibrated, CtF machines will produce the same quality of prints and will not require frequent maintenance. The final printed and reconstructed results can be seen in Figs. 10–12 and in video format (Visualization 1, [20]). Reconstruction of the model saved in the CGH can be obtained by using coherent laser light [Figs. 12(a1) and 12(b1)–12(b3)], which enables precise measurements using an optical setup. However, it can also be obtained with a point white light source, enabling a fast security check using a simple light source such as a smart-phone flashlight [Figs. 12(c1)–12(c3)].

Fig. 12. Example of the final printed and reconstructed hologram. Figure (a1) shows an example of a finished ID containing a total of four holograms. Figures (a2), (a3), (b1), (b2), and (b3) show a monochromatic reconstruction of a single holo-block or a single perspective hologram. Figures (c1), (c2), and (c3) show reconstructions of three separate holo-blocks using a white point light source. Images (b1), (b2), and (b3) contain point cloud objects obtained from images. Note that figure (b2) contains an 11-digit number that can be used to store each person's own ID number, emphasizing the capability of this method to make every hologram unique.

When combined, the advantages of the proposed computer method and the acquired information about the CtF machine yield a fast production method for unique CGHs that have broad applications in security. The final product's security elements are not based only on a hologram diffraction grating. Instead, the synergy of the three proposed security elements strengthens the security while preserving the ability to be printed at a low price on standard commercial machines.
While the manufacturing process is based on the well-known CtF technique, the security basis of this approach lies in the latest advances of computer science. In future research, we will expand the existing list of machines, increasing the maximum resolution to 6400 dpi or higher, measure the film sensitivity to UV exposure, and experiment with different methods of lamination and their effect on the optical reconstruction. Expanding upon the successfully implemented CGH production with the CtF process, we plan to test other production methods, such as the flexo and computer-to-plate processes, using the parameters and procedures presented in this paper and in our previous research.

Funding. AKD d.o.o. (Ministry of the Interior of the Republic of Croatia); Hrvatska Zaklada za Znanost (HrZZ IP-2019-04-6418).

Acknowledgment. Material and equipment support came from the AKD d.o.o. (Ministry of the Interior of the Republic of Croatia) project "Analysis of the possible application of printed computer-generated holograms as security elements on identification documents." The equipment was supplied by the Croatian Science Foundation project "Laser synthesis of nanoparticles and applications" (HrZZ IP-2019-04-6418), PI: dr. sc. N. Krstulović.

Data Availability. Data underlying the results presented in this paper are available in Visualization 1, Ref. [20].

REFERENCES

1. J. Vacca, Holograms & Holography: Design, Techniques, & Commercial Applications (Charles River Media, 2001).
2. M. Sutkowski, M. Kujawinska, and M. T. Stadnik, "Holovideo based on digitally stored holograms," Proc. SPIE 4659, 309–318 (2002).
3. E. Tajahuerce and B. Javidi, "Encrypting three-dimensional information with digital holography," Appl. Opt. 39, 6595–6601 (2000).
4. A. Symeonidou, D. Blinder, and P. Schelkens, "Colour computer-generated holography for point clouds utilizing the Phong illumination model," Opt. Express 26, 10282–10298 (2018).
5. U. Schnars and W. Jueptner, Digital Holography: Digital Hologram Recording, Numerical Reconstruction, and Related Techniques (Springer, 2005), pp. 1–164.
6. V. Cviljušac, A. Divjak, and D. Modrić, "Computer generated holograms of 3D points cloud," Teh. Vjesnik - Tech. Gaz. 25, 1020–1027 (2018).
7. T. Zhao, J. Liu, Q. Gao, P. He, Y. Han, and Y. Wang, "Accelerating computation of CGH using symmetric compressed look-up-table in color holographic display," Opt. Express 26, 16063–16073 (2018).
8. Y. Pan, X. Xu, S. Solanki, X. Liang, R. B. A. Tanjung, C. Tan, and T.-C. Chong, "Fast CGH computation using S-LUT on GPU," Opt. Express 17, 18543–18555 (2009).
9. T. Shimobaba, H. Nakayama, N. Masuda, and T. Ito, "Rapid calculation algorithm of Fresnel computer-generated-hologram using look-up table and wavefront-recording plane methods for three-dimensional display," Opt. Express 18, 19504–19509 (2010).
10. F. Depasse, M. A. Paesler, D. Courjon, and J. M. Vigoureux, "Huygens–Fresnel principle in the near field," Opt. Lett. 20, 234–236 (1995).
11. X. Zhang, X. Liu, and X. Chen, "Computer-generated holograms for 3D objects using the Fresnel zone plate," Proc. SPIE 5636, 109–115 (2005).
12. A. J. Lee and D. P. Casasent, "Computer generated hologram recording using a laser printer," Appl. Opt. 26, 136–138 (1987).
13. L. B. Lesem, P. M. Hirsch, and J. A. Jordan, "The Kinoform: a new wavefront reconstruction device," IBM J. Res. Dev. 13, 150–155 (1969).
14. J. Su, X. Yan, Y. Huang, X. Jiang, Y. Chen, and T. Zhang, "Progress in the synthetic holographic stereogram printing technique," Appl. Sci. 8, 851 (2018).
Zhang, "Progress in the synthetic holographic stereogram printing technique," Appl. Sci. 8, 851 (2018). [CrossRef] 15. V. Cviljušac, A. L. Brkić, A. Divjak, and D. Modrić, "Utilizing standard high-resolution graphic computer-to-film process for computer-generated hologram printing," Appl. Opt. 58, G143–G148 (2019). [CrossRef] 16. P. Tsang, T.-C. Poon, W.-K. Cheung, and J.-P. Liu, "Computer generation of binary Fresnel holography," Appl. Opt. 50, B88–B95 (2011). [CrossRef] 17. C. Martinez, F. Laulagnet, and O. Lemonnier, "Gray tone image watermarking with complementary computer generated holography," Opt. Express 21, 15438–15451 (2013). [CrossRef] 18. F. Lekien and J. Marsden, "Tricubic interpolation in three dimensions," Int. J. Numer. Methods Eng. 63, 455–471 (2005). [CrossRef] 19. M. E. Rudnaya, R. M. M. Mattheij, J. J. Maubach, and M. ter Hg, "Gradient-based sharpness function," J. Manuf. Sci. Eng. 1126, 301–306 (2011). 20. "Visualization of the results (CGH in security printing)," figshare (2021) [CrossRef] . Article Order J. Vacca, Holograms & Holography: Design, Techniques, & Commercial Applications (Charles River Media, 2001). M. Sutkowski, M. Kujawinska, and M. T. Stadnik, "Holovideo based on digitally stored holograms," Proc. SPIE 4659, 309–318 (2002). [Crossref] E. Tajahuerce and B. Javidi, "Encrypting three-dimensional information with digital holography," Appl. Opt. 39, 6595–6601 (2000). A. Symeonidou, D. Blinder, and P. Schelkens, "Colour computer-generated holography for point clouds utilizing the phong illumination model," Opt. Express 26, 10282–10298 (2018). U. Schnars and W. Jueptner, Digital Holography: Digital Hologram Recording, Numerical Reconstruction, and Related Techniques (Springer, 2005), pp. 1–164. V. Cviljušac, A. Divjak, and D. Modrić, "Computer generated holograms of 3D Points cloud," Teh. Vjesnik - Tech. Gaz. 25, 1020–1027 (2018). T. Zhao, J. Liu, Q. Gao, P. He, Y. Han, and Y. Wang, "Accelerating computation of CGH using symmetric compressed look-up-table in color holographic display," Opt. Express 26, 16063–16073 (2018). Y. Pan, X. Xu, S. Solanki, X. Liang, R. B. A. Tanjung, C. Tan, and T.-C. Chong, "Fast CGH computation using S-LUT on GPU," Opt. Express 17, 18543–18555 (2009). T. Shimobaba, H. Nakayama, N. Masuda, and T. Ito, "Rapid calculation algorithm of Fresnel computer-generated-hologram using look-up table and wavefront-recording plane methods for three-dimensional display," Opt. Express 18, 19504–19509 (2010). F. Depasse, M. A. Paesler, D. Courjon, and J. M. Vigoureux, "Huygens–Fresnel principle in the near field," Opt. Lett. 20, 234–236 (1995). X. Zhang, X. Liu, and X. Chen, "Computer-generated holograms for 3D objects using the Fresnel zone plate," Proc. SPIE 5636, 109–115 (2005). A. J. Lee and D. P. Casasent, "Computer generated hologram recording using a laser printer," Appl. Opt. 26, 136–138 (1987). L. B. Lesem, P. M. Hirsch, and J. A. Jordan, "The Kinoform: a new wavefront reconstruction device," IBM J. Res. Dev. 13, 150–155 (1969). J. Su, X. Yan, Y. Huang, X. Jiang, Y. Chen, and T. Zhang, "Progress in the synthetic holographic stereogram printing technique," Appl. Sci. 8, 851 (2018). V. Cviljušac, A. L. Brkić, A. Divjak, and D. Modrić, "Utilizing standard high-resolution graphic computer-to-film process for computer-generated hologram printing," Appl. Opt. 58, G143–G148 (2019). P. Tsang, T.-C. Poon, W.-K. Cheung, and J.-P. Liu, "Computer generation of binary Fresnel holography," Appl. Opt. 50, B88–B95 (2011). C. Martinez, F. Laulagnet, and O. 
Lemonnier, "Gray tone image watermarking with complementary computer generated holography," Opt. Express 21, 15438–15451 (2013). F. Lekien and J. Marsden, "Tricubic interpolation in three dimensions," Int. J. Numer. Methods Eng. 63, 455–471 (2005). M. E. Rudnaya, R. M. M. Mattheij, J. J. Maubach, and M. ter Hg, "Gradient-based sharpness function," J. Manuf. Sci. Eng. 1126, 301–306 (2011). "Visualization of the results (CGH in security printing)," figshare (2021). Blinder, D. Brkic, A. L. Casasent, D. P. Chen, X. Cheung, W.-K. Chong, T.-C. Courjon, D. Cviljušac, V. Depasse, F. Divjak, A. Gao, Q. Han, Y. He, P. Hirsch, P. M. Ito, T. Javidi, B. Jiang, X. Jordan, J. A. Jueptner, W. Kujawinska, M. Laulagnet, F. Lee, A. J. Lekien, F. Lemonnier, O. Lesem, L. B. Liang, X. Liu, J.-P. Liu, X. Marsden, J. Martinez, C. Masuda, N. Mattheij, R. M. M. Maubach, J. J. Modric, D. Nakayama, H. Paesler, M. A. Pan, Y. Poon, T.-C. Rudnaya, M. E. Schelkens, P. Schnars, U. Shimobaba, T. Solanki, S. Stadnik, M. T. Su, J. Sutkowski, M. Symeonidou, A. Tajahuerce, E. Tan, C. Tanjung, R. B. A. ter Hg, M. Tsang, P. Vacca, J. Vigoureux, J. M. Wang, Y. Xu, X. Yan, X. Zhang, T. Zhang, X. Zhao, T. Appl. Opt. (4) Appl. Sci. (1) IBM J. Res. Dev. (1) Int. J. Numer. Methods Eng. (1) J. Manuf. Sci. Eng. (1) Opt. Express (5) Opt. Lett. (1) Proc. SPIE (2) Teh. Vjesnik - Tech. Gaz. (1) Supplementary Material (1) Visualization 1 For a clearer understanding, we recorded the results of our scientific work. The video shows the optical reconstruction of printed computer-generated holograms with application in security printing. Optica participates in Crossref's Cited-By Linking service. Citing articles from Optica Publishing Group journals and other participating publishers are listed here. Alert me when this article is cited. Click here to see a list of articles that cite this paper Fig. 1. Example of point cloud model creation from an input image with (a) an input gray-scale image, (b) semi-stochastic point model calculation visualization, and (c) the fixed threshold calculation visualization. For both examples, we used ${N_d} = 2000$ . View in Article | Download Full Size | PPT Slide | PDF Fig. 5. Example of image imprinting process using the (a) gray-scale hologram with $1000 \times 1000$ resolution, (b) input image with the resolution of $2 \times 2$ , and (c) generating the final binary hologram. The low resolution of the input images is used to emphasize the effect.
I am not alone in thinking of the potential benefits of smart drugs in the military. In their popular novel Ghost Fleet: A Novel of the Next World War, P.W. Singer and August Cole tell the story of a future war using drug-like nootropic implants and pills, such as Modafinil. DARPA is also experimenting with neurological technology and enhancements such as the smart drugs discussed here, as demonstrated in the following brain initiatives: Targeted Neuroplasticity Training (TNT), Augmented Cognition, and High-quality Interface Systems such as their Next-Generation Nonsurgical Neurotechnology (N3).

Our 2nd choice for a Brain and Memory supplement is Clari-T by Life Seasons. We were pleased to see that their formula included 3 of the 5 necessary ingredients: Huperzine A, Phosphatidylserine and Bacopin. In addition, we liked that their product came in a vegetable capsule. The product contains silica and rice bran, though, which we are not sure is necessary.

As already mentioned, AMPs and MPH are classified by the U.S. Food and Drug Administration (FDA) as Schedule II substances, which means that buying or selling them is a felony offense. This raises the question of how the drugs are obtained by students for nonmedical use. Several studies addressed this question and yielded reasonably consistent answers.

Cost-wise, the gum itself (~$5) is an irrelevant sunk cost and the DNB something I ought to be doing anyway. If the results are negative (which I'll define as d<0.2), I may well drop nicotine entirely since I have no reason to expect other forms (patches) or higher doses (2mg+) to create new benefits. This would save me an annual expense of ~$40 with a net present value of <$820; even if we count the time-value of the 20 minutes for the 5 DNB rounds over 48 days (0.2 × 48 × 7.25 ≈ 70), it's still a clear profit to run a convincing experiment.

A large review published in 2011 found that the drug aids with the type of memory that allows us to explicitly remember past events (called long-term conscious memory), as opposed to the type that helps us remember how to do things like riding a bicycle without thinking about it (known as procedural or implicit memory). The evidence is mixed on its effect on other types of executive function, such as planning or ability on fluency tests, which measure a person's ability to generate sets of data—for example, words that begin with the same letter.

We hope you find our website to be a reliable and valuable resource in your search for the most effective brain enhancing supplements. In addition to product reviews, you will find information about how nootropics work to stimulate memory, focus, and increase concentration, as well as tips and techniques to help you experience the greatest benefit for your efforts.

I take my piracetam in the form of capped pills consisting (in descending order) of piracetam, choline bitartrate, anhydrous caffeine, and l-tyrosine. On 8 December 2012, I happened to run out of them and couldn't fetch more from my stock until 27 December.
This forms a sort of (non-randomized, non-blind) short natural experiment: did my daily 1-5 mood/productivity ratings fall during 8-27 December compared to November 2012 & January 2013? The graphed data suggests to me a decline:

Many people find it difficult to think clearly when they are stressed out. Ongoing stress leads to progressive mental fatigue and an eventual breakdown. Luckily, there are several ways that nootropics can help relieve stress. One is through the natural promotion of feelings of relaxation and the other is by replenishing the brain chemicals drained by stress.

Similarly, Mehta et al 2000 noted that the positive effects of methylphenidate (40 mg) on spatial working memory performance were greatest in those volunteers with lower baseline working memory capacity. In a study of the effects of ginkgo biloba in healthy young adults, Stough et al 2001 found improved performance in the Trail-Making Test A only in the half with the lower verbal IQ.

I have no particularly compelling story for why this might be a correlation and not causation. It could be placebo, but I wasn't expecting that. It could be selection effect (days on which I bothered to use the annoying LED set are better days) but then I'd expect the off-days to be below-average and compared to the 2 years of trendline before, there doesn't seem like much of a fall.

There is evidence to suggest that modafinil, methylphenidate, and amphetamine enhance cognitive processes such as learning and working memory...at least on certain laboratory tasks. One study found that modafinil improved cognitive task performance in sleep-deprived doctors. Even in non-sleep deprived healthy volunteers, modafinil improved planning and accuracy on certain cognitive tasks. Similarly, methylphenidate and amphetamine also enhanced performance of healthy subjects in certain cognitive tasks.

Many over the counter and prescription smart drugs fall under the category of stimulants. These substances contribute to an overall feeling of enhanced alertness and attention, which can improve concentration, focus, and learning. While these substances are often considered safe in moderation, taking too much can cause side effects such as decreased cognition, irregular heartbeat, and cardiovascular problems.

Bacopa Monnieri is probably one of the safest and most effective memory and mood enhancing nootropics available today, with the fewest side effects. In some humans, greatly extended use of Bacopa Monnieri can result in nausea. One of the primary products of AlternaScript is Optimind, a nootropic supplement which contains Bacopa Monnieri as one of its main ingredients.

"Cavin's enthusiasm and drive to help those who need it is unparalleled! He delivers the information in an easy to read manner, no PhD required from the reader. 🙂 Having lived through such trauma himself he has real empathy for other survivors and it shows in the writing. This is a great read for anyone who wants to increase the health of their brain, injury or otherwise! Read it!!!"
Today piracetam is a favourite with students and young professionals looking for a way to boost their performance, though decades after Giurgea's discovery, there still isn't much evidence that it can improve the mental abilities of healthy people. It's a prescription drug in the UK, though it's not approved for medical use by the US Food and Drug Administration and can't be sold as a dietary supplement either. One last note on tolerance; after the first few days of using smart drugs, just like with other drugs, you may not get the same effects as before. You've just experienced the honeymoon period. This is where you feel a large effect the first few times, but after that, you can't replicate it. Be careful not to exceed recommended doses, and try cycling to get the desired effects again. It's basic economics: the price of a good must be greater than cost of producing said good, but only under perfect competition will price = cost. Otherwise, the price is simply whatever maximizes profit for the seller. (Bottled water doesn't really cost $2 to produce.) This can lead to apparently counter-intuitive consequences involving price discrimination & market segmentation - such as damaged goods which are the premium product which has been deliberately degraded and sold for less (some Intel CPUs, some headphones etc.). The most famous examples were railroads; one notable passage by French engineer-economist Jules Dupuit describes the motivation for the conditions in 1849: But perhaps the biggest difference between Modafinil and other nootropics like Piracetam, according to Patel, is that Modafinil studies show more efficacy in young, healthy people, not just the elderly or those with cognitive deficits. That's why it's great for (and often prescribed to) military members who are on an intense tour, or for those who can't get enough sleep for physiological reasons. One study, by researchers at Imperial College London, and published in Annals of Surgery, even showed that Modafinil helped sleep-deprived surgeons become better at planning, redirecting their attention, and being less impulsive when making decisions. The majority of nonmedical users reported obtaining prescription stimulants from a peer with a prescription (Barrett et al., 2005; Carroll et al., 2006; DeSantis et al., 2008, 2009; DuPont et al., 2008; McCabe & Boyd, 2005; Novak et al., 2007; Rabiner et al., 2009; White et al., 2006). Consistent with nonmedical user reports, McCabe, Teter, and Boyd (2006) found 54% of prescribed college students had been approached to divert (sell, exchange, or give) their medication. Studies of secondary school students supported a similar conclusion (McCabe et al., 2004; Poulin, 2001, 2007). In Poulin's (2007) sample, 26% of students with prescribed stimulants reported giving or selling some of their medication to other students in the past month. She also found that the number of students in a class with medically prescribed stimulants was predictive of the prevalence of nonmedical stimulant use in the class (Poulin, 2001). In McCabe et al.'s (2004) middle and high school sample, 23% of students with prescriptions reported being asked to sell or trade or give away their pills over their lifetime. "We stumbled upon fasting as a way to optimize cognition and make yourself into a more efficient human being," says Manuel Lam, an internal medicine physician who advises Nootrobox on clinical issues. 
He and members of the company's executive team have implanted glucose monitors in their arms — not because they fear diabetes but because they wish to track the real-time effect of the foods they eat. Ethical issues also arise with the use of drugs to boost brain power. Their use as cognitive enhancers isn't currently regulated. But should it be, just as the use of certain performance-enhancing drugs is regulated for professional athletes? Should universities consider dope testing to check that students aren't gaining an unfair advantage through drug use? An unusual intervention is infrared/near-infrared light of particular wavelengths (LLLT), theorized to assist mitochondrial respiration and yielding a variety of therapeutic benefits. Some have suggested it may have cognitive benefits. LLLT sounds strange but it's simple, easy, cheap, and just plausible enough it might work. I tried out LLLT treatment on a sporadic basis 2013-2014, and statistically, usage correlated strongly & statistically-significantly with increases in my daily self-ratings, and not with any sleep disturbances. Excited by that result, I did a randomized self-experiment 2014-2015 with the same procedure, only to find that the causal effect was weak or non-existent. I have stopped using LLLT as likely not worth the inconvenience. as scientific papers become much more accessible online due to Open Access, digitization by publishers, and cheap hosting for pirates, the available knowledge about nootropics increases drastically. This reduces the perceived risk by users, and enables them to educate themselves and make much more sophisticated estimates of risk and side-effects and benefits. (Take my modafinil page: in 1997, how could an average person get their hands on any of the papers available up to that point? Or get detailed info like the FDA's prescribing guide? Even assuming they had a computer & Internet?) Remembering what Wedrifid told me, I decided to start with a quarter of a piece (~1mg). The gum was pretty tasteless, which ought to make blinding easier. The effects were noticeable around 10 minutes - greater energy verging on jitteriness, much faster typing, and apparent general quickening of thought. Like a more pleasant caffeine. While testing my typing speed in Amphetype, my speed seemed to go up >=5 WPM, even after the time penalties for correcting the increased mistakes; I also did twice the usual number without feeling especially tired. A second dose was similar, and the third dose was at 10 PM before playing Ninja Gaiden II seemed to stop the usual exhaustion I feel after playing through a level or so. (It's a tough game, which I have yet to master like Ninja Gaiden Black.) Returning to the previous concern about sleep problems, though I went to bed at 11:45 PM, it still took 28 minutes to fall sleep (compared to my more usual 10-20 minute range); the next day I use 2mg from 7-8PM while driving, going to bed at midnight, where my sleep latency is a more reasonable 14 minutes. I then skipped for 3 days to see whether any cravings would pop up (they didn't). I subsequently used 1mg every few days for driving or Ninja Gaiden II, and while there were no cravings or other side-effects, the stimulation definitely seemed to get weaker - benefits seemed to still exist, but I could no longer describe any considerable energy or jitteriness. 
A synthetic derivative of Piracetam, aniracetam is believed to be the second most widely used nootropic in the Racetam family, popular for its stimulatory effects because it enters the bloodstream quickly. Initially developed for memory and learning, many anecdotal reports also claim that it increases creativity. However, clinical studies show no effect on the cognitive functioning of healthy adult mice. Ngo has experimented with piracetam himself ("The first time I tried it, I thought, 'Wow, this is pretty strong for a supplement.' I had a little bit of reflux, heartburn, but in general it was a cognitive enhancer. . . . I found it helpful") and the neurotransmitter DMEA ("You have an idea, it helps you finish the thought. It's for when people have difficulty finishing that last connection in the brain"). Powders are good for experimenting with (easy to vary doses and mix), but not so good for regular taking. I use OO gel capsules with a Capsule Machine: it's hard to beat $20, it works, it's not that messy after practice, and it's not too bad to do 100 pills. However, I once did 3kg of piracetam + my other powders, and doing that nearly burned me out on ever using capsules again. If you're going to do that much, something more automated is a serious question! (What actually wound up infuriating me the most was when capsules would stick in either the bottom or top try - requiring you to very gingerly pull and twist them out, lest the two halves slip and spill powder - or when the two halves wouldn't lock and you had to join them by hand. In contrast: loading the gel caps could be done automatically without looking, after some experience.) Unfortunately, cognitive enhancement falls between the stools of research funding, which makes it unlikely that such research programs will be carried out. Disease-oriented funders will, by definition, not support research on normal healthy individuals. The topic intersects with drug abuse research only in the assessment of risk, leaving out the study of potential benefits, as well as the comparative benefits of other enhancement methods. As a fundamentally applied research question, it will not qualify for support by funders of basic science. The pharmaceutical industry would be expected to support such research only if cognitive enhancement were to be considered a legitimate indication by the FDA, which we hope would happen only after considerably more research has illuminated its risks, benefits, and societal impact. Even then, industry would have little incentive to delve into all of the issues raised here, including the comparison of drug effects to nonpharmaceutical means of enhancing cognition. As it happens, these are areas I am distinctly lacking in. When I first began reading about testosterone I had no particular reason to think it might be an issue for me, but it increasingly sounded plausible, an aunt independently suggested I might be deficient, a biological uncle turned out to be severely deficient with levels around 90 ng/dl (where the normal range for 20-49yo males is 249-839), and finally my blood test in August 2013 revealed that my actual level was 305 ng/dl; inasmuch as I was 25 and not 49, this is a tad low. "Love this book! Still reading and can't wait to see what else I learn…and I am not brain injured! Cavin has already helped me to take steps to address my food sensitivity…seems to be helping and I am only on day 5! He has also helped me to help a family member who has suffered a stroke. 
Thank you Cavin, for sharing all your knowledge and hard work with us! This book is for anyone that wants to understand and implement good nutrition with all the latest research to back it up. Highly recommend!" What if you could simply take a pill that would instantly make you more intelligent? One that would enhance your cognitive capabilities including attention, memory, focus, motivation and other higher executive functions? If you have ever seen the movie Limitless, you have an idea of what this would look like—albeit the exaggerated Hollywood version. The movie may be fictional but the reality may not be too far behind. Most diehard nootropic users have considered using racetams for enhancing brain function. Racetams are synthetic nootropic substances first developed in Russia. These smart drugs vary in potency, but they are not stimulants. They are unlike traditional ADHD medications (Adderall, Ritalin, Vyvanse, etc.). Instead, racetams boost cognition by enhancing the cholinergic system. Feeling behind, I resolved to take some armodafinil the next morning, which I did - but in my hurry I failed to recall that 200mg armodafinil was probably too much to take during the day, with its long half life. As a result, I felt irritated and not that great during the day (possibly aggravated by some caffeine - I wish some studies would be done on the possible interaction of modafinil and caffeine so I knew if I was imagining it or not). Certainly not what I had been hoping for. I went to bed after midnight (half an hour later than usual), and suffered severe insomnia. The time wasn't entirely wasted as I wrote a short story and figured out how to make nicotine gum placebos during the hours in the dark, but I could have done without the experience. All metrics omitted because it was a day usage. Though their product includes several vitamins including Bacopa, it seems to be missing the remaining four of the essential ingredients: DHA Omega 3, Huperzine A, Phosphatidylserine and N-Acetyl L-Tyrosine. It missed too many of our key criteria and so we could not endorse this product of theirs. Simply, if you don't mind an insufficient amount of essential ingredients for improved brain and memory function and an inclusion of unwanted ingredients – then this could be a good fit for you. The surveys just reviewed indicate that many healthy, normal students use prescription stimulants to enhance their cognitive performance, based in part on the belief that stimulants enhance cognitive abilities such as attention and memorization. Of course, it is possible that these users are mistaken. One possibility is that the perceived cognitive benefits are placebo effects. Another is that the drugs alter students' perceptions of the amount or quality of work accomplished, rather than affecting the work itself (Hurst, Weidner, & Radlow, 1967). A third possibility is that stimulants enhance energy, wakefulness, or motivation, which improves the quality and quantity of work that students can produce with a given, unchanged, level of cognitive ability. To determine whether these drugs enhance cognition in normal individuals, their effects on cognitive task performance must be assessed in relation to placebo in a masked study design. …Phenethylamine is intrinsically a stimulant, although it doesn't last long enough to express this property. In other words, it is rapidly and completely destroyed in the human body. 
It is only when a number of substituent groups are placed here or there on the molecule that this metabolic fate is avoided and pharmacological activity becomes apparent. "Cavin, you are phemomenal! An incredulous journey of a near death accident scripted by an incredible man who chose to share his knowledge of healing his own broken brain. I requested our public library purchase your book because everyone, those with and without brain injuries, should have access to YOUR brain and this book. Thank you for your legacy to mankind!" "In the hospital and ICU struggles, this book and Cavin's experience are golden, and if we'd have had this book's special attention to feeding tube nutrition, my son would be alive today sitting right here along with me saying it was the cod liver oil, the fish oil, and other nutrients able to be fed to him instead of the junk in the pharmacy tubes, that got him past the liver-test results, past the internal bleeding, past the brain difficulties controlling so many response-obstacles back then. Back then, the 'experts' in rural hospitals were unwilling to listen, ignored my son's unexpected turnaround when we used codliver oil transdermally on his sore skin, threatened instead to throw me out, but Cavin has his own proof and his accumulated experience in others' journeys. Cavin's boxed areas of notes throughout the book on applying the brain nutrient concepts in feeding tubes are powerful stuff, details to grab onto and run with… hammer them! More than once I have seen results indicating that high-IQ types benefit the least from random nootropics; nutritional deficits are the premier example, because high-IQ types almost by definition suffer from no major deficiencies like iodine. But a stimulant modafinil may be another such nootropic (see Cognitive effects of modafinil in student volunteers may depend on IQ, Randall et al 2005), which mentions: Modafinil is not addictive, but there may be chances of drug abuse and memory impairment. This can manifest in people who consume it to stay up for way too long; as a result, this would probably make them ill. Long-term use of Modafinil may reduce plasticity and may harm the memory of some individuals. Hence, it is sold only on prescription by a qualified physician. ^ Sattler, Sebastian; Mehlkop, Guido; Graeff, Peter; Sauer, Carsten (February 1, 2014). "Evaluating the drivers of and obstacles to the willingness to use cognitive enhancement drugs: the influence of drug characteristics, social environment, and personal characteristics". Substance Abuse Treatment, Prevention, and Policy. 9 (1): 8. doi:10.1186/1747-597X-9-8. ISSN 1747-597X. PMC 3928621. PMID 24484640.
Buffer-aided relaying improves both throughput and end-to-end delay Javad Hajipour1, Rukhsana Ruby1, Amr Mohamed2 & Victor C. M. Leung1 Buffer-aided relaying has recently attracted a lot of attention due to the improvement in the system throughput. However, a side effect usually deemed is that buffering at relay nodes results in the increase of packet delays. In this paper, we study the effect of buffering at relays on the end-to-end delay of users' data, from the time they arrive at the source until delivery to the destination. We use simple discussions to provide an insight on the overall waiting time of the packets in the system, taking into account the queue dynamics both in the source and relay. We analyze the end-to-end delay in the relay networks with Bernoulli data arrivals and channel conditions and prove that the data packets experience lower average end-to-end delay in the buffer-aided relaying system compared with the conventional one. Moreover, using intuitive generalizations, we conclude that the use of buffers at relays improves not only throughput but ironically the average end-to-end packet delay. Through extensive simulations, we validate our analytical results for the system when the data arrival and channel condition processes follow Bernoulli distribution. Furthermore, via the simulations under the settings of practical systems, we confirm our intuition for the general scenarios. Wireless relays have received significant attention in the past decade because of their capability to enhance the capacity and coverage of wireless networks. The authors in [1] have investigated the bounds on the ergodic and outage capacities for wireless relay channels in Rayleigh fading environment. Similarly, Azarian et al. [2] have studied amplify-and-forward (AF) and decode-and-forward (DF) relay channels and proposed variants of these protocols for reaching the bounds on achievable diversity-multiplexing trade-offs. Employing wireless relays in contemporary cellular networks is also considered as a promising solution for meeting the growing demands of users in these systems, due to the cost-effective and fast deployment possibility of relay stations [3]. Therefore, there has been extensive research in this area to identify the challenges and address them accordingly [4–7]. In particular, Ng et al. [4] studied resource allocation in an orthogonal frequency division multiple access (OFDMA)-based system with AF relays and proposed optimal subchannel and power allocation for maximizing the system goodput. In [5], the authors investigated joint relay selection and resource allocation taking link asymmetry and imperfect channel state information (CSI) into account. Zhang et al. [6, 7] studied resource allocation to provide quality of service (QoS) for the users with minimum rate or maximum packet delay requirements. Usually in the literature in this area, it is assumed that relaying procedure is performed in two consecutive subslots of a transmission interval; i.e., in the first subslot, the base station (BS) transmits to the relay and in the second one, the relay forwards the received data to the mobile terminal. We refer to this method as "conventional relaying" in which the end-to-end transmission rate in each transmission interval is limited by the poorest link quality. Recently, it has been shown that using buffer in the relay node can improve the system throughput [8–11]. 
This is achieved due to the fact that the buffering capability allows the relay to store packets when its channel condition is bad and transmit when it is good. Motivated by this, several other works have studied buffer-aided relaying scheme in different areas [12–16]. While Krikidis et al. [12] have studied adaptive relay link selection in a single-source multi-relay system, the authors of [13] have investigated that for a multi-source multi-relay scenario. On the other hand, Ahmed et al. [14] have discussed the advantages of buffer-aided relaying for the operation of nodes with energy harvesting capability. Moreover, Liu et al. and Darabi et al. [15, 16] have confirmed the advantage of using buffer-aided relays in two-way relaying and cognitive radio networks, respectively. Any improvement in a system usually comes at a cost. In the case of buffer-aided relaying, the cost is usually deemed to be the increase in packet delays due to queueing in the relay. Consequently, the works in [8–11] have tried to investigate and discuss the trade-off between throughput and delay. This is however based on the assumption of infinitely backlogged buffers in the source (i.e., BS) and considering the queueing delay only at the relay buffer without taking into account the queue dynamics at the BS. In this paper, we aim at filling the abovementioned gap by taking into account the queue dynamics both in the source node and the relay node. Whereas the existing literature [8–11] mostly considers packet delay as the time delay between packet transmission (departure) at the source node and reception (arrival) at the destination node, in this paper, we study end-to-end packet delay, i.e., the delay that data packets experience since their arrival at the source node until reception at the destination node. The difference between the end-to-end delay considered in this paper and the delay investigated in the aforementioned works is that the end-to-end delay includes both the queueing delay that data packets experience in the source node and the time interval between their transmission at the source and reception at the destination. Noting that the delay perceived by the end user is affected by the queueing at both the BS and the relay, in this paper, we investigate the effect of buffer-aided relaying on the end-to-end packet delay. For this, we first provide simple reasoning and discuss the cause of queue formation in a simple queueing system. Based on that, we provide an insight on the delay performance in the buffer-aided and conventional relaying systems. Then, we study the delay performance when data arrival and channel condition processes of the system follow Bernoulli distribution and derive closed form expressions for the average end-to-end packet delay. Using these, we prove that the buffer-aided relaying system incurs lower average end-to-end packet delay compared with the conventional one. Finally, we discuss general scenarios and based on intuitive discussions, we conclude that buffering at relays improves the system throughput as well as the average end-to-end packet delay. Using extensive simulations, we verify our analysis and demonstrate the validity of the presented perspective. To the best of our knowledge, this is the first work that discusses the effect of buffering at relays on the overall waiting time in a relay-based network and provides the above conclusion and insight. We note that the discussions in this paper assume infinite buffer capacities in the BS and relay. 
However, the insights provided can be used in future works to study scenarios with limited buffer capacities, where buffer overflow events can also affect the end-to-end packet delays owing to the need for repeated transmissions. In such scenarios, to reduce buffer overflow incidents, a buffer-aware source rate control mechanism can be exploited to adjust the traffic load in the network [17]. Also, buffer-aware resource allocation methods similar to [18] can be employed to efficiently serve the system queues. Then, considering the discussions presented in this paper as well as the probability of repeated transmissions, the end-to-end packet delay can be investigated for conventional and buffer-aided relaying systems with finite buffer capacities in the BS and relay. The rest of this paper is organized as follows. Section 2 provides a background on queueing delay based on a simple queueing system. In Section 3, through mathematical analysis and generalized intuitions, we study the end-to-end delay performance of conventional and buffer-aided relaying systems. We validate our analytical study and provide the results for general scenarios through extensive simulations in Section 4, and finally Section 5 concludes the paper. In this section, we study a simple queueing system and discuss the cause of packet delays to provide a basis for the next section, which studies the end-to-end packet delay in relaying networks. Let us consider a single buffer, as shown in Fig. 1 (simple queueing system), which is fed by a deterministic data arrival process and served by a single server. We assume that time is divided into slots of equal length, indexed by t∈{1,2,…}. The total number of data packets that arrive at the buffer is N. Starting from t=1, one packet arrives per time slot. Therefore, the last packet arrives at t=N. For simplicity, we assume that the arrivals occur at the beginning of time slots. The server might be active or inactive in each time slot. When it is active, it can serve only one packet per time slot, where service implies delivering the packet successfully to the destination. When it is inactive, no packet is served. We note that if the server is active in each time slot t∈{1,…,N}, each packet will be served immediately after its arrival. In this case, no queue forms in the buffer and, consequently, each packet experiences an overall delay of one time slot, which is due to the time spent in the server. Accordingly, the packets will arrive at the destination at the beginning of time slots t∈{2,…,N+1}. However, if the server is inactive in the first time slot, the first packet has to wait in the buffer until time slot 2 to get served. Then, in time slot 2, when the second packet arrives, the server is busy serving the first packet. Therefore, the second packet also experiences one slot of delay in the queue and one slot of delay in the server. In a similar manner, all the following packets incur the same queueing and service delays. In other words, the delayed operation of the server causes the nonzero queueing delay of the first packet, which is transferred to the subsequent packets as well. Based on the above discussion, if the server is inactive in time slot x∈{1,…,N}, it adds one slot to the queueing delay (and the overall waiting time) of every packet arriving in slot x or afterward.
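This behavior can be checked with a short simulation. The following sketch (an illustrative fragment of ours, not taken from the paper; the packet count and the choice of inactive slots in the examples are arbitrary) implements the single-buffer system just described and reproduces the observation that each inactive slot in {1,…,N} adds exactly one slot to the overall waiting time of every packet arriving in that slot or later.

```python
# Illustrative sketch (not from the paper): a single buffer with one
# deterministic arrival per slot (slots 1..N) and a server that is either
# active (serves one head-of-line packet per slot) or inactive in a slot.

def simulate_single_buffer(N, inactive_slots):
    """Return {arrival slot: overall delay in slots} for N deterministic arrivals.

    inactive_slots is assumed to be a finite collection of slot indices.
    """
    inactive = set(inactive_slots)
    queue = []        # waiting packets, identified by their arrival slot
    delivery = {}     # arrival slot -> slot at whose beginning the packet is delivered
    t = 1
    while len(delivery) < N:
        if t <= N:
            queue.append(t)              # one arrival at the beginning of slot t
        if t not in inactive and queue:
            served = queue.pop(0)        # head-of-line packet is served during slot t
            delivery[served] = t + 1     # and reaches the destination at slot t+1
        t += 1
    return {p: delivery[p] - p for p in sorted(delivery)}

if __name__ == "__main__":
    print(simulate_single_buffer(5, inactive_slots=[]))   # every packet: delay 1
    print(simulate_single_buffer(5, inactive_slots=[1]))  # every packet: delay 2
    print(simulate_single_buffer(5, inactive_slots=[3]))  # packets 1,2: delay 1; 3,4,5: delay 2
```

Running it with an always-active server gives a delay of one slot for every packet, while marking slot 1 (or slot 3) inactive shifts the delays of all packets from that slot onward by one, exactly as argued above.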
In general, the packet which arrived in time slot t will experience a queueing delay of n t and will be delivered at time slot t+n t +1, where n t indicates the number of slots before and including t in which the server was inactive. It is clear that the cause of queue formation in such systems is the interruption in the operation of the server, which is translated to queueing delays of the data packets. Effect of buffer-aided relaying on the end-to-end packet delay In this section, first we assume that data arrive in a deterministic manner and the availability of the channels follows Bernoulli distribution and provide an insight on the end-to-end delay performance for conventional and buffer-aided relaying systems. Then, we analytically derive the average end-to-end packet delay for these systems, in the case that both the data arrival process and the availability of the channels follow Bernoulli distribution. Finally, we discuss general cases and present the intuitions about the end-to-end delay performance. Relaying systems with deterministic data arrivals and Bernoulli channel conditions Let us consider a relay network, with one source node, i.e., the BS, one relay node and one destination (or user) node, where the relay works based on the DF technique. It is assumed that there is no direct link between the BS and the user, and the transmissions are done only through the relay. There is only one channel in the system, which can be used for transmissions either from the BS to the relay or from the relay to the user. We use c 1 and c 2 to indicate the BS channel condition (for the link between the BS and relay) and relay channel condition (for the link between the relay and user), respectively. These variables can be either "Good" or "Bad", meaning respectively that it is possible to transmit one or zero packet successfully on the corresponding channel. It is assumed that the channel conditions remain constant during each time slot but vary independently from one time slot to another. The probability of being "Good" is s 1 and s 2 for the BS and relay channel conditions, respectively. We assume that each time slot is further divided into two subslots, where the BS and relay can transmit a packet in the first and second subslots, respectively. The reason for considering subslots is stated later in Remark 1. Figure 2 a shows the queueing model for a conventional relaying system, where the relay does not have buffer and therefore, if it receives a packet in a subslot, it has to transmit it immediately in the next subslot. The server 1 and server 2 indicate the wireless channel from the BS to relay and from the relay to user, respectively. On the other hand, Fig. 2 b indicates a relaying network, where the relay has a buffer which allows it to store the data packets and transmit whenever its channel is good. In both of the figures, the rectangle enclosed around the servers is to abstract the overall serving behavior of the system from the time that the BS starts to transmit a data packet until it is delivered to the user. Note that the works in [8–11] in fact study the delay by considering only the time a packet spends inside this rectangle and do not take into account the waiting time in the BS queue, as they assume infinitely backlogged buffer for the BS. However, in practice, the packets arrive finitely at the BS and experience a queueing delay before the transmission from the BS to the relay. 
Therefore, the overall waiting time of a packet in the relay network, which we also refer to as end-to-end delay, includes both its waiting time in the BS queue and the time it spends inside the aforementioned rectangle (i.e., the time period between the transmission at the BS and successful reception at the destination). Note that the data arrivals at the BS might be due the packet generation in the BS itself, in which case the end-to-end packet delay might be referred to as "generation-to-delivery packet delay"; or it might be due to exogenous packet arrivals from an external network (e.g., the Internet), in which case the end-to-end packet delay might be referred to as "enter-to-exit packet delay" to specify the portion of the delay that a packet experiences since it enters the relay network until it exits it in the destination. In this paper, for simplicity, we will use the term "end-to-end" instead of "generation-to-delivery" or "enter-to-exit," to specify the delay from the time that a packet arrives at the BS buffer until it is delivered to the destination. Queueing model of (a) conventional relaying system and (b) buffer-aided relaying system In the following, we consider the data arrivals at the BS buffer as the deterministic process, with N packets, mentioned in the previous section. Taking the overall service behavior of the systems into account and based on the previous section, we discuss the overall waiting time of data packets in both the conventional and buffer-aided relaying systems. Table 1 shows the different states for the joint conditions of the BS and relay channels, in which G and B indicate "Good" and "Bad" conditions, respectively. We assume that a central scheduler at the relay has the CSI in each time slot, using the pilot signals transmitted by the BS and destination through error-free control channels at the beginning of that time slot. Based on the CSI and the buffering capability of relay, the scheduler decides about the packet transmissions over the links and notifies the BS and the destination accordingly. These are explained in detail in the following. Table 1 Joint channel condition probabilities In the case of conventional relaying (no buffer in the relay), only when c 1 c 2=G G, the scheduler notifies the BS to transmit a packet in the first subslot. Then, the relay forwards the received packet to the destination in the second subslot. In the other three cases, i.e., when one or both of c 1 and c 2 are "Bad," the packets remain in the BS buffer and are not transmitted. Therefore, conventional relaying serves the packets with the probability of s=s 1 s 2 in each time slot. Consequently, based on the discussions in the previous section, the overall server in the system is inactive with the probability of $${} u_{nb}=P(GB)+P(BG)+P(BB)=1-s=1-s_{1}s_{2} $$ where u nb indicates the interruption probability for the overall server in the system without buffering at the relay. Considering this and the discussions in the previous section, in each time slot, the probability of "increase of one slot" in the overall waiting time of the packets present in that time slot or arrived after that is u nb =1−s 1 s 2. Here, the increase in the overall waiting time is due to the increase in the BS queueing delay of those packets. Remark 1. Now, we explain the reason for considering subslots in each time slot. First note that the conventional relaying protocol stated above takes into account the CSI to decide about the packet transmission. 
This is reasonable as the CSI is assumed to be available at the scheduler, irrespective of exploiting buffer or not in the relay, and using it can prevent information loss in the case that successful delivery of packet is not possible (i.e., either one or both of the channel conditions are "Bad"). Second, since the channel conditions might vary from one time slot to another, the scheduler does not know what the CSI will be in the next time slot. Therefore, the CSI obtained in the beginning of a time slot can only be used to decide about the packet transmissions from the BS and relay during that time slot. Consequently, it is needed to have a subslot for the BS transmission and another one for the relay transmission, in conventional relaying. Note that alternatively, one might refer to subslots as slots, in which case the channel conditions remain constant over two time slots and the BS transmissions happen in the odd-numbered slots and the relay transmissions happen in even-numbered slots. Now consider the system where the relay has a buffer, but as before, the BS can only transmit in the first subslot and the relay can transmit in the second subslot. We note that if the channel conditions are as BB in time slot x, similar to the system with conventional relaying, no transmission will be scheduled and therefore, there will be an increase of one slot in the overall waiting time of the packets present in time slot x or arriving afterward. However, for the channel conditions as GB and BG, the case is different. In order to clearly investigate these states, first we consider the following example: In time slot t=1, the channel conditions are as GB. Therefore, in the first subslot, the BS transmission is scheduled and packet 1 will be transmitted from the BS to relay; but due to the "Bad" channel condition of relay, it will not be transmitted to the user in the second subslot and will be stored in the relay buffer. In time slot t=2, the channel conditions are as BG. Therefore, in the first subslot, there will not be any transmission from the BS to relay and the overall waiting time of the packets 2,…,N will be increased by one slot. However, due to good condition of the relay channel, packet 1 will be transmitted from the buffer of the relay to the user in the second subslot. In the above example, it is observed that packet 1 is served by the relay in time slot t=2 and therefore, it is delivered to the user in time slot t=3. This has become possible due to the queueing of that packet in the relay buffer. Note that with conventional relaying, however, in the above example, packet 1 would remain in the BS queue in both time slots t=1 and t=2, and the overall waiting time would increase by two slots for all the packets. Based on the above discussion and considering the nonzero probability of having channel conditions as GB and BG in two consecutive time slots, it can be concluded that u b <u nb , where u b is the interruption probability of the overall server in the buffer-aided relaying system. In other words, the buffering capability in relay reduces the interruption probability of the overall server and consequently, it reduces the overall waiting time for the data packets. This is achieved due to the fact that the queue size in the BS is reduced, and the data packets transferred to the relay buffer enable the efficient use of the relay channel. 
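The conclusion u b <u nb can also be illustrated numerically. The fragment below is a rough sketch of ours (it assumes a continuously backlogged BS and i.i.d. Bernoulli channel states; the horizon and seed are arbitrary): it counts the fraction of slots in which a packet reaches the destination, where conventional relaying succeeds only in GG slots, while buffer-aided relaying succeeds whenever the relay channel is Good and the relay buffer is non-empty.

```python
import random

# Sketch only (assumed model): compare how often the "overall server" delivers
# a packet with and without a relay buffer, when the BS always has data to send.

def delivery_fractions(s1, s2, T=200_000, seed=0):
    """Fraction of slots with a successful end-to-end delivery, backlogged BS."""
    rng = random.Random(seed)
    delivered_nb = delivered_b = 0
    relay_queue = 0
    for _ in range(T):
        c1_good = rng.random() < s1
        c2_good = rng.random() < s2
        if c1_good and c2_good:        # conventional relaying: only the GG state works
            delivered_nb += 1
        if c1_good:                    # buffer-aided, subslot 1: BS -> relay buffer
            relay_queue += 1
        if c2_good and relay_queue:    # buffer-aided, subslot 2: relay -> destination
            relay_queue -= 1
            delivered_b += 1
    return delivered_nb / T, delivered_b / T

if __name__ == "__main__":
    for s1, s2 in [(0.9, 0.9), (0.5, 0.9), (0.5, 0.5)]:
        f_nb, f_b = delivery_fractions(s1, s2)
        print(f"s1={s1}, s2={s2}: interruption u_nb ~ {1 - f_nb:.3f}, u_b ~ {1 - f_b:.3f}")
```

The measured interruption fraction of the buffer-aided system should stay below 1−s 1 s 2, since its long-run delivery fraction approaches min(s 1, s 2) rather than s 1 s 2, in line with the discussion above.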
Relaying systems with Bernoulli data arrivals and channel conditions Now, we consider the relaying networks where both data arrivals and channel conditions follow Bernoulli distribution. We assume that in each time slot, the probability of one packet arrival at the BS buffer is a, and, as before, the probability of "Good" channel condition for the BS and relay is equal to s 1 and s 2, respectively. It is assumed that a<s 1 s 2 and therefore, the system queues are stable in the case of conventional and buffer-aided relaying [19, Chapter 2]. In the following, when we use subscripts b and nb for the variables, we refer to them in the case with buffering and without buffering at the relay, respectively. Buffer-aided relaying system Based on [20, Section 7.5], Fig. 3 shows the Markov Chain model for the queue dynamics at the BS buffer for the buffer-aided relaying network, where each state represents the number of packets in the queue. Let p n , n∈{0,1,⋯ } denote the probability that in steady state, there are n packets in the BS queue. Note that due to equilibrium in the steady state, we have: $$\begin{array}{@{}rcl@{}} p_{0} &=& \left[1-a(1-s_{1})\right]p_{0} + s_{1}(1-a)p_{1}, \\ p_{n} &=& {a}(1-s_{1})p_{n-1} + \left[1-\{a(1-s_{1})+s_{1}(1-a)\}\right]p_{n} \\&&+\, s_{1}(1-a)p_{n+1}, n=1,2,\cdots\\ \end{array} $$ Markov chain for the number of packets in the BS buffer in buffer-aided relaying system Based on the above equations, the probability of each state can be written as $$ p_{n} = \rho^{n} p_{0},~n=0,1,2,\cdots, $$ $$ \rho=\frac{a(1-s_{1})}{s_{1}(1-a)}. $$ Considering the fact that \(\sum _{n=0}^{\infty }p_{n}=1\), we have: $$ p_{0}=1-\rho. $$ Therefore, the expected number of packets in the BS queue can be expressed by $$ E\left({Q^{B}_{b}}\right) = \sum_{n=0}^{\infty}np_{n} =\frac{\rho}{1-\rho}= \frac{a(1-s_{1})}{s_{1}-a}. $$ Note that when a new packet arrives at the BS buffer, its expected delay until completion of its service by the BS can be split into two parts. The first part is the expected time that it has to wait until the packets already in the queue are served, i.e., \(E\left ({Q^{B}_{b}}\right)E\left ({T^{B}_{b}}\right)\), where \(E\left ({T^{B}_{b}}\right)\) is the expected delay imposed due the service of each packet when it is in the head of queue. The second part is the expected time since the packet itself gets to the head of the queue until its service is completed, which is denoted as \(E\left (T^{B*}_{b}\right)\). Therefore, the expected waiting time of a packet in the BS, \(E\left ({D^{B}_{b}}\right)\), can be written as \(E\left ({D^{B}_{b}}\right) = E\left ({Q^{B}_{b}}\right)E\left ({T^{B}_{b}}\right) + E\left (T^{B*}_{b}\right)\). This is in fact the well known mean value approach which holds for queueing systems with memoryless data arrival processes [21, Section 4.3]. The interpretation of \(E\left ({T^{B}_{b}}\right)\) is as follows. The delay caused due the service of a packet in the head of the queue is 1 slot with the probability of s 1 (this is in the case that the BS channel is good at the time that the packet gets to the head of queue). It is (1+1) slots with the probability of (1−s 1)s 1, (2+1) slots with the probability of (1−s 1)2 s 1, (k+1) slots with the probability of (1−s 1)k s 1, and so on. 
Therefore, the expected delay caused due to the service of a packet in the head of queue is given by $${} \begin{aligned} E\left({T^{B}_{b}}\right) &= s_{1} + (1+1)(1-s_{1})s_{1} + (2+1)(1-s_{1})^{2}s_{1} + \cdots \\ &= \sum_{k=0}^{\infty}(1-s_{1})^{k}s_{1}.(k+1) \\ &= s_{1}\sum_{k=0}^{\infty}(1-s_{1})^{k}k + s_{1}\sum_{k=0}^{\infty}(1-s_{1})^{k} \\ &= s_{1}(1-s_{1})\left[\frac{d}{ds_{1}}\left(-\sum_{k=0}^{\infty}(1-s_{1})^{k}\right)\right]\\&\quad+\, s_{1}\frac{1}{1-(1-s_{1})} \\ &= -\,s_{1}(1-s_{1})\frac{d}{ds_{1}}\frac{1}{s_{1}} + s_{1}\frac{1}{s_{1}} \\ &= \frac{1-s_{1}}{s_{1}} + 1 \\ &= \frac{1}{s_{1}} \end{aligned} $$ On the other hand, we can compute \(E(T^{B*}_{b})\) as follows. Considering that the packet has reached the head of the queue, its delay until the departure from the BS is equal to 0.5 with the probability of s 1, (1+0.5) with the probability of (1−s 1)s 1, (2+0.5) with the probability of (1−s 1)2 s 1, (k+0.5) with the probability of (1−s 1)k s 1, and so on. Hence, the expected waiting time of the packet after reaching the head of the queue is $$\begin{array}{@{}rcl@{}} E\left(T^{B*}_{b}\right) &=& 0.5s_{1} + (1+0.5)(1-s_{1})s_{1}\\ &&+\,(2+0.5)(1-s_{1})^{2}s_{1} + \cdots \\ &=& \sum_{k=0}^{\infty}(1-s_{1})^{k}s_{1}.(k+0.5) \\ &=& s_{1}\sum_{k=0}^{\infty}(1-s_{1})^{k}k + 0.5s_{1}\sum_{k=0}^{\infty}(1-s_{1})^{k} \\ &=& \frac{1-s_{1}}{s_{1}} + 0.5. \end{array} $$ Based on the above discussions, the expected total delay of a packet in the BS is equal to $$\begin{array}{@{}rcl@{}} E\left({D^{B}_{b}}\right) &=& E\left({Q^{B}_{b}}\right)E\left({T^{B}_{b}}\right) + E\left(T^{B*}_{b}\right)\\ &=& \frac{a(1-s_{1})}{s_{1} - a}\frac{1}{s_{1}} + \frac{1-s_{1}}{s_{1}} + 0.5 \\ &=& \frac{1}{s_{1}}\left[\frac{a(1-s_{1})}{s_{1} - a} + 1\right] -0.5 \\ &=& \frac{1-a}{s_{1}-a} - 0.5. \end{array} $$ We note that in each time slot, either one or zero packet departs the BS. Therefore, the packet departures from the BS can be modeled as a Bernoulli process. Due to the stability of the queues, the data departure rate from the BS is equal to the data arrival rate in its buffer. Consequently, the probability that one packet departs the BS, or, equivalently, the probability that one packet arrives at the relay buffer, is equal to a. As a result, the average delay that a packet experiences in the relay can be computed in the similar manner as the average delay in the BS, which is expressed by $$ E\left({D^{R}_{b}}\right) = \frac{1-a}{s_{2}-a} - 0.5. $$ Based on (8) and (9), the average waiting time of a packet in the buffer-aided relaying system is given by $${} E(D_{b}) = E\left({D^{B}_{b}}\right) + E\left({D^{R}_{b}}\right) = \frac{1-a}{s_{1}-a} + \frac{1-a}{s_{2}-a} - 1. $$ Conventional relaying system Note that in the conventional relaying system, the BS can serve the packets in its buffer only when both its own channel and the relay channel are in good condition. Hence, the service probability for serving the BS buffer is s 1 s 2. Considering that, the average number of packets in the BS buffer can be obtained by replacing s 1 in (5) with s 1 s 2, i.e., \(E\left (Q^{B}_{\textit {nb}}\right)=\frac {a(1-s_{1}s_{2})}{s_{1}s_{2}-a}\). Similarly, the average delay caused for a packet due to the service of each packet in front of it can be computed based on (6) and by using s 1 s 2 instead of s 1, i.e., \(E\left (T^{B}_{\textit {nb}}\right)=\frac {1}{s_{1}s_{2}}\). 
Also, the average delay that a packet experiences when it gets to the head of the queue can be obtained, based on (7), as \(E\left(T^{B*}_{\textit{nb}}\right)=\frac{1-s_{1}s_{2}}{s_{1}s_{2}} + 0.5\). Therefore, we have: $$ E\left(D^{B}_{nb}\right) = E\left(Q^{B}_{nb}\right)E\left(T^{B}_{nb}\right) + E\left(T^{B*}_{nb}\right)= \frac{1-a}{s_{1}s_{2}-a} - 0.5. $$ On the other hand, when a packet arrives at the relay, it is immediately served without waiting in any buffer. Therefore, it spends only 0.5 of a slot in the relay, which is due to the service time in the relay. Consequently, we have: $$ E\left(D^{R}_{nb}\right) = 0.5. $$ Based on (11) and (12), the average waiting time of a packet in the conventional relaying system is given by $$ E\left(D_{nb}\right) = E\left(D^{B}_{nb}\right)+ E\left(D^{R}_{nb}\right)= \frac{1-a}{s_{1}s_{2} - a}. $$ In order to compare the delay performance of the conventional and buffer-aided relaying systems, Theorem 1 states the main result of this subsection. Theorem 1. Consider a relaying network where the data arrival process at the BS and the channel availability processes follow a Bernoulli distribution and the packet arrival probability satisfies the stability condition a<s 1 s 2. Then, the average end-to-end packet delay in the buffer-aided relaying system is less than or equal to that in the conventional one. In other words, we have: $$ E(D_{b})\leq E(D_{nb}), $$ where equality holds only when at least one of the channels is always in "Good" condition, i.e., s 1=1 or s 2=1. Proof: please refer to the Appendix. General relaying system Now we consider a general scenario, where the data arrival and channel condition processes follow general distributions and are stationary and ergodic. We assume that the data arrivals and transmission rates have finite mean and variance. We use r br (t), r rd (t), and r bd (t) to denote the achievable transmission rate in time slot t between the BS and relay, the relay and destination, and the BS and destination, respectively. Without buffering, the BS needs to transmit to the relay in the first subslot, and then the relay has to forward the data immediately in the next subslot. We know that in this case, the end-to-end achievable rate between the BS and the user is \(r_{\textit{bd}}(t)=\frac{1}{2} \min \{r_{\textit{br}}(t), r_{\textit{rd}}(t)\}\). Therefore, the scheduler in the relay notifies the BS at the beginning of each time slot to transmit with a rate that can be supported by both of the links so as to lead to a successful reception at the destination. Due to this, the transmission rate in each slot is limited by the link with the worst channel condition in that time slot. However, when the relay has a buffer, there is no necessity for the immediate forwarding of the data and the abovementioned limitation is relaxed; therefore, the BS has the opportunity to transmit continuously to the relay when the channel condition from the BS to relay is good. Then, the relay can store the data in its buffer and transmit when the channel from the relay to the user is good. Because of this, buffering makes it possible to improve the system throughput, as shown in [8–11]. Improvement in throughput is equivalent to improvement in the average end-to-end service rate of the data arriving at the BS buffer. In other words, the increase in the system throughput means that more data is transferred from the BS to the user or, equivalently, the same data is transferred from the BS to the user in less time.
Therefore, on average, packets experience lower end-to-end delay, i.e., the delay from their arrival at the BS until delivery to the destination. Based on the above discussion, we draw the following conclusion. Although buffer-aided relaying results in queueing delay in the relay, it also facilitates data transfer from the BS to the user and leads to a large reduction in the queueing delay at the BS. Therefore, the overall effect is an improvement of the average end-to-end packet delay. In summary, we state this as follows. Proposition. Using a buffer in the relay improves the system throughput, and therefore, it reduces the average end-to-end packet delay. □ We note that the given proposition is about the average end-to-end packet delay. There might be some packets that experience larger end-to-end delays in buffer-aided relaying compared with conventional relaying. However, the reduction in the average end-to-end packet delay indicates that most of the packets experience lower delay in the case of buffer-aided relaying than with conventional relaying. This is confirmed in the next section, in Figs. 11 and 16. Moreover, we note that the above discussions do not say anything about the maximum and minimum possible end-to-end packet delays in buffer-aided relays. In general, considering the queueing dynamics in both the BS and relay, the maximum possible end-to-end packet delay in both conventional and buffer-aided relaying is infinite, which is due to the infinite buffer size of the BS and relay. However, the simulation results presented in the next section indicate that the maximum end-to-end packet delay is usually smaller in buffer-aided relaying than in conventional relaying. On the other hand, the minimum possible end-to-end packet delay in both of the relaying systems is one time slot, which happens when there is no queue in either the BS buffer or the relay buffer; in such a case, when a packet arrives at the BS, it can be immediately transmitted to the relay and then the relay can immediately transmit it to the destination. This takes a total of two subslots or, equivalently, one time slot. In the previous subsection, even in the buffer-aided relaying system, we assumed that the BS and relay transmissions are a priori scheduled to be done in the first and second subslots, respectively. In general, when buffering is exploited in the relay, each subslot can be used dynamically for the BS transmission or the relay transmission, if there are data in their buffers. In this regard, a dynamic scheduling policy is required to stabilize the system queues. Specifically, in each subslot, this policy should decide on allocating the channel to the BS or the relay such that the system queues remain bounded. For this, the well-known Max-Weight (MW) algorithm can be used, which has the largest stability region [19, 22, 23]. MW aims at maximizing the weighted rates of the links, where the weight of a link is taken equal to the difference of the queue sizes at the two ends of the link. Note that due to the interdependence of the queue sizes and the scheduling decisions in MW, it is highly intractable to derive expressions for the average queue sizes and delays under the MW policy. However, if the data arrival rate is inside the stability region (so that it can be supported by the network capacity), it is guaranteed that scheduling the links by the MW policy will result in bounded average queue sizes and delays [19, 22, 23].
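To make the rule concrete, the fragment below sketches a single MW decision for the two-hop BS–relay–destination network (a minimal illustration of ours, not the scheduler used in the simulations; the function name, the clipping of negative differentials to zero, the tie-breaking rule, and the example numbers are our own choices).

```python
# Minimal sketch of one Max-Weight (backpressure) decision for the two-hop
# BS -> relay -> destination network: activate the link with the largest
# product of queue differential and instantaneous achievable rate.

def max_weight_choice(q_bs, q_relay, r_br, r_rd):
    """Return 'BS', 'RELAY', or None for one scheduling decision.

    q_bs, q_relay : current backlogs (packets) at the BS and at the relay
    r_br, r_rd    : achievable rates on BS->relay and relay->destination
    The destination holds no queue, so the relay link's differential is q_relay.
    """
    w_br = max(q_bs - q_relay, 0) * r_br   # BS link weight
    w_rd = q_relay * r_rd                  # relay link weight
    if w_br == 0 and w_rd == 0:
        return None                        # neither link is worth activating
    return "BS" if w_br >= w_rd else "RELAY"

if __name__ == "__main__":
    # BS heavily backlogged, relay channel currently poor -> serve the BS link
    print(max_weight_choice(q_bs=12, q_relay=2, r_br=1.0, r_rd=0.2))
    # Relay backlog built up and its channel is good -> drain the relay
    print(max_weight_choice(q_bs=3, q_relay=8, r_br=0.4, r_rd=1.0))
```

In each subslot, the link with the larger product of queue differential and instantaneous rate is activated; since the destination holds no queue, the relay link's differential is simply the relay backlog.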
MW is an attractive scheduling policy for stabilizing the queues in buffer-aided relay networks as it works by utilizing just the instantaneous queue and channel state information (QCSI) and does not require information about the probability distribution of packet arrival processes and channel states. Considering the abovementioned, we summarize the costs of buffer-aided relaying in the following remark. Note that the costs for the improvements brought by buffer-aided relaying are the requirement for a memory to buffer data at the relay and the necessity for a scheduling algorithm to keep the queues stable. It is worth noting that the proposition stated above can also be considered for the case of relay networks with more than two hops or more than one relays. For successful data transmission with conventional relaying in a multihop network, it is needed to have the channel states of all the hops from the source to destination favorable during the transmission interval. However, with buffer-aided relaying, it is possible to use the channels more opportunistically. This is studied in [24], where it is shown that the outage (or unsuccessful packet reception) is reduced in buffer-aided multihop networks, which is equivalent to improvement in throughput. Similarly, it is shown in [25] that in a network with multiple relays, the system throughput is improved in buffer-aided relaying compared with the case without buffers at the relays. Therefore, based on those results and considering the queueing delays both in the source and the relay, we conclude that the packets that are successfully received at the destination experience lower average end-to-end delay in the aforementioned relaying systems when buffering used in relays compared with the case without buffers in relays. The analysis for deriving the exact expressions of average end-to-end packet delay in these scenarios needs more investigation, as the effect of the relay selection policy should also be taken into account, and is an interesting research topic for future works. Numerical results To verify the presented discussions, we have conducted extensive Matlab simulations over 10,000 time slots and more. We have investigated the cases that the data arrival and channel condition processes follow Bernoulli distribution, as well as general cases with the settings of a practical system. We present the simulation results in the following. Bernoulli data arrivals and channel conditions In order to validate the analysis provided in Subsection 3.2, in Figs. 4, 5, and 6, we present the average packet delay obtained from both the analytical expressions and the simulation. In each of these figures, we have fixed the values of s 1 and s 2 and have evaluated the effect of increase in a on the average end-to-end packet delay. In order to maintain the stability of the system queues with both conventional and buffer-aided relaying, for each figure, we have considered a<s 1 s 2. Figure 4 displays the case with high probability for the good channel conditions at the BS and relay, i.e., s 1=s 2=0.9. It is clear that the analytical results are quite close to the simulation ones. Moreover, the results confirm that the buffer-aided relaying has lower packet delays compared with the conventional relaying. As expected, both of the systems incur larger delay as the packet arrival probability increases. However, the delay in the conventional relaying system increases faster comparing with that in the buffer-aided relaying system. 
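The same trend can be read directly from the closed-form expressions (10) and (13). The following short sketch (illustrative only; the grid of arrival probabilities is our own choice and need not coincide with the points plotted in the figures) evaluates both expressions for s 1=s 2=0.9 and shows the conventional-relaying delay blowing up as a approaches s 1 s 2=0.81, while the buffer-aided delay grows much more slowly.

```python
# Sketch only: evaluate the closed-form average end-to-end delays derived in
# Subsection 3.2 on an arbitrary grid of arrival probabilities.

def delay_buffer_aided(a, s1, s2):
    return (1 - a) / (s1 - a) + (1 - a) / (s2 - a) - 1   # expression (10)

def delay_conventional(a, s1, s2):
    return (1 - a) / (s1 * s2 - a)                       # expression (13)

if __name__ == "__main__":
    s1 = s2 = 0.9
    for a in [0.1, 0.3, 0.5, 0.7, 0.8]:                  # all satisfy a < s1*s2 = 0.81
        print(f"a={a}:  E(D_b)={delay_buffer_aided(a, s1, s2):6.2f}"
              f"   E(D_nb)={delay_conventional(a, s1, s2):6.2f}")
```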
Average end-to-end packet delay in the case of Bernoulli channel distribution with s 1=s 2=0.9 Average end-to-end packet delay in the case of Bernoulli channel distribution with s 1=0.5,s 2=0.9 Furthermore, Figs. 5 and 6 show the results for the cases that either one or both of the channels have relatively lower probability of being in good condition. It is observed that in these cases, the conventional relaying results in significantly higher delays even at the lower data arrival rates. In particular, the performance difference of these relaying systems is larger in Fig. 5 compared with Fig. 4 and the largest in Fig. 6. This is because when the probability of good channel conditions is low, in the case of conventional relaying, the BS has to wait for a long time before having both the channels favorable for transmission. However, in the case of buffer-aided relaying, the BS can transmit to the relay even when the relay channel is bad. Then, the relay can buffer the received data and transmit in its subslots whenever its channel is good. We have also conducted simulations to investigate the average total queue sizes when the packet arrival probabilities get close to the stability region boundaries, in the case of s 1=0.5,s 2=0.5. We note that based on [19, Chapter 2], the stability region boundary in conventional relaying is equal to s 1 s 2=0.25 whereas it is equal to min[s 1,s 2]=0.5 in buffer-aided relaying. Also, note that in conventional relaying, the total queue size is the BS queue size whereas in buffer-aided relaying, the total queue size is the sum of the BS queue size and relay queue size. Therefore, based on (5) and the discussions in Subsection 3.2, the mathematical expression of average total queue size is \(\frac {a(1-s_{1}s_{2})}{s_{1}s_{2}-a}\) in conventional relaying and \(\frac {a(1-s_{1})}{s_{1}-a}+\frac {a(1-s_{2})}{s_{2}-a}\) in buffer-aided relaying. The graphs for these equations as well as the results of simulations are shown in Figs. 7 and 8. It is observed that the analytical results are close to the simulation ones. Furthermore, Fig. 7 shows that the average total queue size in conventional relaying system increases rapidly when the packet arrival probability gets close to 0.25 (the stability region boundary for conventional relaying). This is due to the fact that the probability of having both the channel conditions "Good" (to be able to serve the packets in the BS queue in conventional relaying) is 0.25. When the data arrival probability gets close to this value, more packets have to wait in the queue until they get to the head of buffer and get the chance to be transmitted. On the other hand, it is observed in Fig. 8 that the similar effect happens in buffer-aided relaying in considerably larger packet arrival probability, i.e., 0.5 (the stability region boundary for buffer-aided relaying), and before that, the average total queue size is small. This means that in the arrival probabilities larger than 0.25 in conventional relaying, when a packet arrives at the BS, it is expected to encounter an infinite queue size (and end-to-end delay) in its path to the destination. However, even though in buffer-aided relaying, there are two buffers in the path of packets from the BS to the destination, it is expected that the packets will encounter a finite total queue size (and end-to-end delay) before reaching the destination, as long as their arrival probability is less than 0.5. 
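A compact way to reproduce this type of validation is sketched below (our own Monte Carlo fragment, not the authors' Matlab code; the slot/subslot ordering follows the model of Section 3, while the operating point, horizon, and random seed are arbitrary). It simulates both systems with Bernoulli(a) arrivals and Bernoulli(s 1), Bernoulli(s 2) channel states and compares the measured average end-to-end delays with the closed forms (10) and (13).

```python
import random

# Monte Carlo sketch (assumed slot/subslot model): Bernoulli(a) arrivals at the
# BS, Bernoulli(s1)/Bernoulli(s2) channel states, FIFO queues, and delay counted
# from the arrival slot to the delivery slot inclusive.

def simulate_relay(a, s1, s2, buffered, T=500_000, seed=1):
    """Average end-to-end delay (in slots) measured over T slots."""
    rng = random.Random(seed)
    bs_q, relay_q, delays = [], [], []
    for t in range(T):
        if rng.random() < a:            # arrival at the start of slot t
            bs_q.append(t)
        c1 = rng.random() < s1          # BS -> relay channel state in slot t
        c2 = rng.random() < s2          # relay -> destination channel state in slot t
        if buffered:
            if c1 and bs_q:             # subslot 1: BS transmits head-of-line packet
                relay_q.append(bs_q.pop(0))
            if c2 and relay_q:          # subslot 2: relay forwards head-of-line packet
                delays.append(t - relay_q.pop(0) + 1)
        else:
            if c1 and c2 and bs_q:      # conventional: both links must be Good
                delays.append(t - bs_q.pop(0) + 1)
    return sum(delays) / len(delays)

if __name__ == "__main__":
    a, s1, s2 = 0.4, 0.9, 0.9           # one arbitrary stable operating point (a < s1*s2)
    print("buffer-aided : simulated", round(simulate_relay(a, s1, s2, True), 3),
          " analytical", round((1 - a) / (s1 - a) + (1 - a) / (s2 - a) - 1, 3))
    print("conventional : simulated", round(simulate_relay(a, s1, s2, False), 3),
          " analytical", round((1 - a) / (s1 * s2 - a), 3))
```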
Average total queue size in conventional relaying; the case of Bernoulli channel distribution with s 1=s 2=0.5 and the packet arrival probability close to the stability region boundary Average total queue size in buffer-aided relaying; the case of Bernoulli channel distribution with s 1=s 2=0.5 and the packet arrival probability close to the stability region boundary General scenario Note that the mathematical analysis presented in Subsection 3.2 and the numerical results shown in Subsection 4.1 are for Bernoulli data arrivals and channel conditions and provide an insight on the effect of using a buffer in relay on the end-to-end packet delay. In order to verify the discussions presented in Subsection 3.3 for general data arrival and channel condition processes, we consider a scenario with more realistic settings. For this scenario, the simulation parameters are shown in Table 2. It is assumed that the channel fading is flat over the system bandwidth and constant during each time slot; however, it can vary from one slot to another. For the link between the relay and user, Rayleigh channel model is used, and for the link from the BS to relay, Rician channel model is used with κ factor equal to 6 dB [26]. In the case of conventional relaying, the transmissions at the BS and relay are done in consecutive subslots. For buffer-aided relaying, we have used MW policy [19, 22, 23] to decide in an adaptive way, about the transmission in each subslot, either from the BS or from the relay buffer. The simulations were conducted for 100 independent realizations of channel condition and data arrival processes, each over 10,000 time slots. Table 2 Simulation parameters Figures 9 and 10 show the BS and relay average queue sizes over time, respectively, at the arrival rate of 50 packets/second. The average queue size is obtained by taking the average of queue sizes over 100 simulations. It is observed that with buffer-aided relaying, although data are queued in the relay, the average BS queue size in each time slot is reduced significantly. This results in lower average end-to-end packet delays in buffer-aided relaying compared with the conventional relaying, as shown in Fig. 11. In particular, in this scenario, the average end-to-end packet delays are 11 ms and 30 ms in buffer-aided and conventional relaying, respectively. Note that Fig. 11 indicates that in general, the average end-to-end packet delay is less in buffer-aided relaying. In other words, even though some packets might experience larger overall waiting time compared with the conventional relaying, most of the packets experience lower delay since their arrival at the BS until delivery to the destination. Moreover, it is observed that the maximum end-to-end packet delay is less in the case of buffer-aided relaying. Average BS queue size over time at the arrival rate of 50 packets/second Average relay queue size over time at the arrival rate of 50 packets/second CDF of end-to-end packet delays at the arrival rate of 50 packets/second Next, we investigate the effect of increase in the packet arrival rate on the throughput and delay performance. It is observed in Fig. 12 that the conventional relaying is able to support data arrival rates up to 60 packets/second, in which range it results in the average throughput equal to data arrival rate at the BS. However after that, due to low capacity, it starts to get saturated. This leads to queue instability and large end-to-end delays for packets, as shown in Fig. 13. 
In contrast, the buffer-aided relaying is able to provide the average throughput equal to the data arrival rate, in all the packet arrival rates, and therefore leads to very low end-to-end packet delays. Effect of packet arrival rate at the BS on the average throughput in each time slot Effect of packet arrival rate at the BS on the average end-to-end packet delay In order to have a complete picture, we also present the system performance in the arrival rate of 100 packets/second, in Figs. 14 and 15. Figure 14 shows that in conventional relaying, the average BS queue size grows unbounded; this is due to the low capacity of relaying channel which is unable to serve all the arrived data. This leads to large end-to-end packet delays as depicted in Fig. 16. On the other hand, as shown in Fig. 15, buffer-aided relaying leads to queueing in the relay buffer, which helps to utilize the channel variations efficiently. It allows to transfer the data from the BS buffer to relay buffer and from relay buffer to user, when the corresponding channels have good conditions, and therefore leads to low end-to-end packet delays. In particular, in this scenario, the average end-to-end packet delays are 20 ms and 1355 ms, respectively, in buffer-aided and conventional relaying. Average BS queue size over time at the arrival rate of 100 packets/second Average relay queue size over time at the arrival rate of 100 packets/second CDF of end-to-end packet delays at the arrival rate of 100 packets/second The above results confirm that using buffer in relay improves the throughput as well as the average end-to-end packet delay in the system. In this paper, we have studied the effect of buffering at the relay on the end-to-end delay performance. Through the discussions about queueing delay, we have explained the cause of delay in a simple queueing system. Based on that, we have provided an insight on the overall delay in the conventional and buffer-aided relaying networks. Moreover, for the case of Bernoulli data arrivals and channel conditions, we have proved analytically that the average packet delay is lower in buffer-aided relaying system compared with the conventional one. Finally, based on intuitive reasoning for general scenarios, we have concluded that employing buffer in the relay improves both the system's throughput and average end-to-end packet delay. Using numerical results, we have verified our analysis and discussions, and shown that using buffer in the relay leads to higher system throughput and lower average end-to-end packet delay. In order to prove that the buffer-aided relaying system incurs equal or lower delay compared with the conventional one, it is required to prove E(D nb )−E(D b )≥0. To show this, note that $${} E(D_{nb}) - E(D_{b}) = \frac{1-a}{s_{1}s_{2}-a} - \frac{1-a}{s_{1}-a} - \frac{1-a}{s_{2}-a} + 1 $$ By adding and subtracting the term \(\frac {1-a}{s_{1}-a}\frac {1-a}{s_{2}-a}\) and rearranging the equations, we have $$\begin{array}{@{}rcl@{}} E(D_{nb}) - E(D_{b}) &=& \frac{1-a}{s_{1}-a}\frac{1-a}{s_{2}-a} - \frac{1-a}{s_{1}-a} - \frac{1-a}{s_{2}-a} \\ &&+\, 1 + \frac{1-a}{s_{1}s_{2}-a} - \frac{1-a}{s_{1}-a}\frac{1-a}{s_{2}-a} \\ &=&\left(\frac{1-a}{s_{1}-a} - 1\right)\left(\frac{1-a}{s_{2} - a} - 1\right) \\&&+\, \frac{1-a}{s_{1}s_{2}-a} - \frac{1-a}{s_{1}-a}\frac{1-a}{s_{2}-a}. \end{array} $$ Note that the packet arrival probability is nonzero and the stability condition holds, i.e., 0<a<s 1 s 2. 
Since s i ≤1, i=1,2, we have 0<a<s i , i=1,2, and \(\frac {1-a}{s_{i}-a} \ge 1\), i=1,2. Therefore, the first term in the right hand side of (16) is non-negative. Hence, it suffices to show $$ \frac{1-a}{s_{1}s_{2}-a} \ge \frac{1-a}{s_{1}-a}\frac{1-a}{s_{2}-a}. $$ By canceling 1−a and cross-multiplying in (17), we obtain $$ \left(s_{1}-a\right)\left(s_{2}-a\right) \ge \left(1-a\right)\left(s_{1}s_{2}-a\right). $$ After multiplying both sides out and canceling the common terms of (18), we have $$ a\left(1-s_{1}\right)\left(1-s_{2}\right) \ge 0, $$ which is always true since s 1≤1 and s 2≤1. A Host-Madsen, J Zhang, Capacity bounds and power allocation for wireless relay channels. IEEE Trans. Inf. Theory. 51(6), 2020–2040 (2005). K Azarian, H El Gamal, P Schniter, On the achievable diversity-multiplexing tradeoff in half-duplex cooperative channels. IEEE Trans. Inf. Theory. 51(12), 4152–4172 (2005). C Hoymann, W Chen, J Montojo, A Golitschek, C Koutsimanis, X Shen, Relaying operation in 3GPP LTE: challenges and solutions. IEEE Commun. Mag. 50(2), 156–162 (2012). D Ng, R Schober, Cross-layer scheduling for OFDMA amplify-and-forward relay networks. IEEE Trans. Veh. Technol. 59(3), 1443–1458 (2010). Z Chang, T Ristaniemi, Z Niu, Radio resource allocation for collaborative OFDMA relay networks with imperfect channel state information. IEEE Trans. Wirel. Commun. 13(5), 2824–2835 (2014). D Zhang, Y Wang, J Lu, Qos aware relay selection and subcarrier allocation in cooperative OFDMA systems. IEEE Commun. Lett. 14:, 294–296 (2010). D Zhang, X Tao, J Lu, M Wang, Dynamic resource allocation for real-time services in cooperative OFDMA systems. IEEE Commun. Lett. 15:, 497–499 (2011). B Xia, Y Fan, J Thompson, H Poor, Buffering in a three-node relay network. IEEE Trans. Wirel. Commun. 7:, 4492–4496 (2008). N Mehta, V Sharma, G Bansal, Performance analysis of a cooperative system with rateless codes and buffered relays. IEEE Trans. Wirel. Commun. 10:, 2816–2840 (2011). N Zlatanov, R Schober, Buffer-aided relaying with adaptive link selection-fixed and mixed rate transmission. IEEE Trans. Inf. Theory. 59:, 2816–2840 (2013). N Zlatanov, A Ikhlef, T Islam, R Schober, Buffer-aided cooperative communications: opportunities and challenges. IEEE Commun. Mag. 52:, 146–153 (2014). I Krikidis, T Charalambous, J Thompson, Buffer-aided relay selection for cooperative diversity systems without delay constraints. IEEE Trans. Wirel. Commun. 11(5), 1957–1967 (2012). T Islam, A Ikhlef, R Schober, V Bhargava, in Proc. IEEE Global Telecommun. Conf.Multisource buffer-aided relay networks: Adaptive rate transmission, (2013), pp. 3577–3582. I Ahmed, A Ikhlef, R Schober, R Mallik, Power allocation for conventional and buffer-aided link adaptive relaying systems with energy harvesting nodes. IEEE Trans. Wirel. Commun. 13(3), 1182–1195 (2014). H Liu, P Popovski, E de Carvalho, Y Zhao, Sum-rate optimization in a two-way relay network with buffering. IEEE Commun. Lett. 17(1), 95–98 (2013). M Darabi, V Jamali, B Maham, R Schober, Adaptive link selection for cognitive buffer-aided relay networks. IEEE Commun. Lett. 19(4), 693–696 (2015). J Yang, Y Ran, S Chen, W Li, L Hanzo, Online source rate control for adaptive video streaming over HSPA and LTE-style variable bitrate downlink channels. To appear in IEEE Trans. Veh. Technol, 1–1 (2015). R Zhu, J Yang, Buffer-aware adaptive resource allocation scheme in LTE transmission systems. EURASIP J. Wirel. Commun. Netw, 1–16 (2015). 
M Neely, Stochastic Network Optimization with Application to Communication and Queueing Systems (Morgan & Claypool, San Rafael, 2010). F Gebali, Analysis of Computer and Communication Networks (Springer, New York, 2008). I Adan, J Resing, Queueing Systems. Online. Available: http://www.win.tue.nl/~iadan/queueing.pdf, Eindhoven University of Technology, Netherlands (2015). M Neely, E Modiano, C Rohrs, Dynamic power allocation and routing for time varying wireless networks. IEEE J. Sel. Areas Commun., Special Issue on Wireless Ad-hoc Networks. 23(1), 89–103 (2005). L Georgiadis, M Neely, L Tassiulas, Resource allocation and cross-layer control in wireless networks. Foundations and Trends in Networking. 1(1), 1–144 (2006). C Dong, L Yang, L Hanzo, Performance analysis of multihop-diversity-aided multihop links. IEEE Trans. Veh. Technol. 61(6), 2504–2516 (2012). A Ikhlef, DS Michalopoulos, R Schober, Max-max relay selection for relays with buffers. IEEE Trans. Wirel. Commun. 11(3), 1124–1135 (2012). M Jeruchim, P Balaban, K Shanmugan, Simulation of Communication Systems: Modeling, Methodology and Techniques, 2nd edn (Kluwer Academic, Dordrecht, 2000). Tech. Rep. 3GPP TR 25.996 V7.0.0 (2007-06), Spatial channel model for multiple input multiple output (MIMO) simulations. Available at: http://www.3gpp.org/DynaReport/25996.htm. This work was supported by the Canadian Natural Sciences and Engineering Research Council through grants RGPIN-2014-06119 and RGPAS-462031-2014, and the National Natural Science Foundation of China through Grant No. 61271182. The work of Amr Mohamed was supported by NPRP 5-782-2-322 from the Qatar National Research Fund (a member of Qatar Foundation). The statements made herein are solely the responsibility of the authors. ECE Department, The University of British Columbia, Vancouver, Canada: Javad Hajipour, Rukhsana Ruby & Victor C. M. Leung. CSE Department, Qatar University, Doha, Qatar: Amr Mohamed. Correspondence to Javad Hajipour. Hajipour, J., Ruby, R., Mohamed, A. et al. Buffer-aided relaying improves both throughput and end-to-end delay. J Wireless Com Network 2015, 261 (2015). doi:10.1186/s13638-015-0482-3. Keywords: Wireless relay networks; Buffering capability
Peritoneal Dialysis Guidelines 2019 Part 1 (Position paper of the Japanese Society for Dialysis Therapy) Yasuhiko Ito ORCID: orcid.org/0000-0002-9676-69611,2, Munekazu Ryuzaki1,3, Hitoshi Sugiyama1,4, Tadashi Tomo1,5, Akihiro C. Yamashita1,6, Yuichi Ishikawa1,7, Atsushi Ueda1,8, Yoshie Kanazawa1,9, Yoshihiko Kanno1,10, Noritomo Itami1,11, Minoru Ito1,12, Hideki Kawanishi1,13, Masaaki Nakayama1,14, Kazuhiko Tsuruya1,15, Hideki Yokoi1,16, Mizuya Fukasawa1,17, Hiroyuki Terawaki1,18, Kei Nishiyama1,19, Hiroshi Hataya1,20, Kenichiro Miura1,21, Riku Hamada1,22, Hyogo Nakakura1,23, Motoshi Hattori1,21, Hidemichi Yuasa1,24 & Hidetomo Nakamoto1,25 Approximately 10 years have passed since the Peritoneal Dialysis Guidelines were formulated in 2009. Much evidence has been reported during the succeeding years, which were not taken into consideration in the previous guidelines, e.g., the next peritoneal dialysis PD trial of encapsulating peritoneal sclerosis (EPS) in Japan, the significance of angiotensin-converting enzyme inhibitors (ACEIs) and angiotensin receptor blockers (ARBs), the effects of icodextrin solution, new developments in peritoneal pathology, and a new international recommendation on a proposal for exit-site management. It is essential to incorporate these new developments into the new clinical practice guidelines. Meanwhile, the process of creating such guidelines has changed dramatically worldwide and differs from the process of creating what were "clinical practice guides." For this revision, we not only conducted systematic reviews using global standard methods but also decided to adopt a two-part structure to create a reference tool, which could be used widely by the society's members attending a variety of patients. Through a working group consensus, it was decided that Part 1 would present conventional descriptions and Part 2 would pose clinical questions (CQs) in a systematic review format. Thus, Part 1 vastly covers PD that would satisfy the requirements of the members of the Japanese Society for Dialysis Therapy (JSDT). This article is the duplicated publication from the Japanese version of the guidelines and has been reproduced with permission from the JSDT. Chapter 1 Introduction Peritoneal dialysis (PD) should be performed for a patient after sufficient information regarding hemodialysis (HD), PD, and kidney transplantation has been provided and consent has been obtained from the patient. The same information should be provided to all patients (Note 1). Appropriate patient education and planned initiation should be undertaken in order to obtain the benefits of PD. Information relating to renal replacement therapy should be provided in patients with stage 4 (glomerular filtration rate [GFR] below 30.0 mL/min/1.73 m2 or above 15.0 mL/min/1.73 m2) chronic kidney disease [CKD]) accompanied by chronic decreased renal function (Note 2). Dialysis initiation should be considered in patients with stage 5 CKD (GFR below 15.0 mL/min/1.73 m2) and who display clinical symptoms that are resistant to conservative treatment (Note 3). Dialysis initiation should be considered in patients with GFR below 6.0 mL/min/1.73 m2 (Note 4). The four points mentioned above were written as the initiation criteria in the Peritoneal Dialysis Guidelines published in 2009 by the Japanese Society for Dialysis Therapy [1]. These same points will also be discussed here. 
Note 1: A questionnaire survey based on the "Peritoneal Dialysis Guidelines" published in 2009 by the Japanese Society for Dialysis Therapy was conducted in 2011 [2]. Results showed that prior to initiation, 64% of patients received all the necessary information on HD, PD, and kidney transplantation; 23% were selectively provided information; and 13% were not provided with all the relevant information. These results showed that patients were informed to varying degrees and that bias existed. Note 2: A similar recommendation regarding the provision of information related to renal replacement therapy was made in the "Maintenance Hemodialysis Guidelines: Hemodialysis Initiation" (2013) (Statement 3) by the Japanese Society for Dialysis Therapy [3] and the "CKD Stage G3b-5 Clinical Guidelines 2017 (2015 Expanded Edition)" (CQ 1) by sponsored research in the Japanese Agency for Medical Research and Development [4]. Note 3: Clinical symptoms that are resistant to conservative treatment and accompany decreased renal function are as follows: fluid retention (edema, pleural effusion, ascites), malnutrition, cardiovascular symptoms (breathing difficulties, shortness of breath, cardiac insufficiency, hypertension), renal anemia, electrolyte imbalance (hypocalcemia, hyperkalemia, hyponatremia, hyperphosphatemia), acid-base imbalance (metabolic acidosis), digestive symptoms (nausea, vomiting, anorexia), and neurological symptoms (impaired consciousness, convulsions, numbness). The Japanese Society of Dialysis Therapy has stated in their "Maintenance Hemodialysis Guidelines: Hemodialysis Initiation" (2013) (Statement 6) that the initiation period should be decided when a patient's GFR is below 15.0 mL/min/1.73 m2 [3]. The estimated GFR (eGFR), which uses the serum creatinine value, age, and sex, is used for evaluating renal function during stable periods. Evaluations with eGFR are also conducted in patients with stage 4 (GFR below 30.0 mL/min/1.73 m2) and 5 (GFR below 15.0 mL/ min/1.73 m2) CKD (Supplementary Provision 1). GFR measurements using a 24-h urinalysis are conducted to the fullest extent possible for dialysis initiation (Supplementary Provision 2). Note 4: The Japanese Society of Dialysis Therapy has stated in their Maintenance Hemodialysis Guidelines: Hemodialysis Initiation (2013) (Statement 7) that with regard to the timing of dialysis initiation, a "favorable prognosis was observed following hemodialysis initiation even when renal failure symptoms are observed if follow-up observations with conservative treatment are conducted until GFR < 8 mL/min/1.73 m2. However, hemodialysis should be introduced until GFR 2 mL/min/1.73 m2 from a post-dialysis prognosis standpoint, even with the absence of renal failure symptoms" [3]. The Japanese Society of Nephrology in their "Evidence-based CKD Clinical Guidelines 2013" (Until Dialysis Treatment Initiation, CQ2) also stated that "early initiation up to around eGFR 8-14 mL/min/1.73 m2, where no uremic symptoms are observed, does not contribute to improved prognosis after the initiation of dialysis. Meanwhile, prognosis can worsen if this is not introduced up to eGFR 2 mL/min/1.73 m2 even without symptoms" [5]. Residual renal function is an important consideration in continued treatment during PD. For this reason, PD, which is expected to maintain the residual renal function, should be initiated when renal function is still present, even if the patient is asymptomatic. 
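For reference, the adult estimating equation referred to above is commonly quoted in the following form (shown here for illustration only; the Japanese Society of Nephrology publication cited by the guideline remains the authoritative source, and, as noted in Supplementary Provision 1, this equation should not be used for children): $$ \mathrm{eGFR}\;(\text{mL/min/1.73 m}^2) = 194 \times \mathrm{Cr}^{-1.094} \times \mathrm{Age}^{-0.287}\;(\times\, 0.739\ \text{for women}), $$ where Cr is the serum creatinine in mg/dL. For example, a serum creatinine of 6.0 mg/dL in a 65-year-old man corresponds to an eGFR of roughly 8 mL/min/1.73 m2, i.e., well below the 15.0 mL/min/1.73 m2 threshold of stage 5 CKD.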
A questionnaire survey was conducted in 2011 using the "Peritoneal Dialysis Guidelines" published in 2009 by the Japanese Society for Dialysis Therapy. This survey confirmed that information related to renal replacement therapy was provided, explanatory material was shared and standardized, non-physician staff members (particularly nurses) were already playing a large role, and eGFR calculations were being conducted at 90% of the facilities [2]. For these reasons, the 2009 "Peritoneal Dialysis Guidelines" have been used as the standard since then. Information provision and consent acquisition Appropriate clinical information regarding the initiation of dialysis is provided directly to the patient, as well as to the patient's family, guardians, and caregivers, as needed. Information provision and consent acquisition procedures should be conducted by teams composed of physicians, nurses, social workers, and clinical engineers. Prior to the initiation of dialysis, patients should be informed about the three treatment methods for renal replacement therapy during end-stage renal failure (i.e., HD, PD, and kidney transplantation), as well as the benefits and drawbacks of each treatment, for consent acquisition. Additionally, it should be ensured that the patient has sufficient comprehension of all the treatment options available and are guided in their treatment selection (Supplementary Provision 3). Insufficient information related to end-stage renal failure treatment methods is currently being provided to patients in Japan, and information provision related to PD has a strong tendency to be limited to facilities that conduct PD [6]. Information should be equally provided, and cooperation with a dialysis facility that can conduct PD should be actively prioritized in cases where a patient, who is in a facility that cannot conduct it, requests that treatment. Evaluation of the initiatives and performance related to the promotion of PD began with the Revision of Medical Fees for Financial Year 2018. In other words, an initiation period addition is calculated based on the documents created by related associations and other referenced materials after sufficient information regarding renal replacement therapy has been provided to the patient. Other additions are calculated when an institution shows performance related to PD guidance management or initiatives for kidney transplantation promotion. Proposals on decision-making processes related to the commencement and continuance of maintenance HD were issued in 2014 in order to address end-stage dialysis non-initiation and suspension [7]. However, there were no such proposals related to PD. Conservative treatment other than renal replacement therapy may also be conducted. Pre-initiation education and planned initiations Pre-initiation education and planned initiations are conducted for PD. Reports have confirmed that PD should be introduced while residual renal function is still present so as to avoid initiation period complications and improve patient prognosis [8]. Active referrals to specialists and outpatient renal failure education are important as they help promote planned initiation of dialysis [9,10,11,12]. Patients who were referred to specialists at an early stage (roughly 6 months or more before initiation) tended to choose PD as their primary form of renal replacement therapy compared to those who were referred immediately prior to symptom onset [13, 14]. 
Reports have also indicated a strong correlation between pre-initiation patient education and PD selection [15]. Hospitalization due to catheter implantation and potential surgical complications are particular issues related to the planned initiation of dialysis. To address these issues, Japan uses a stepwise PD initiation method, which involves the insertion and implantation of the PD catheter prior to the onset of clinical symptoms and initiation of PD as soon as the opportunity presents itself (stepwise initiation of PD using the Moncrief and Popovich technique, SMAP method) [16, 17].

Dialysis initiation period

There is currently no comprehensive, evidence-based standard for the initiation of dialysis. This is thought to be because the pathology of end-stage renal failure patients is strongly affected by factors such as patients' primary disease, age, nutritional status, and complications. Currently, the most widely used standard for initiation of dialysis in Japan is the standard outlined by the clinical research project on renal failure by the Japanese Ministry of Health, Labour and Welfare in 1991 [18]. This standard details a comprehensive evaluation based on renal function, clinical symptoms, and the extent of daily life disabilities. The standard has been validated using patients' prognoses 2 years after the initiation of dialysis and follow-ups regarding complications [18]. Furthermore, this standard considers both the elderly and children. However, it only considers initiation of HD, and there are no analyses for PD. Meanwhile, in the United States, the initiation standard is set at CKD stage 5 (GFR below 15 mL/min/1.73 m2), regardless of HD or PD [19]. Europe has also set CKD stage 5 as the standard, recommending that the initiation of dialysis and surgery preparations begin when renal failure symptoms appear and patients' nutritional status worsens [20]. Considering the current initiation standards across the world, we recommend that Japan also use GFR as the standard for evaluation methods of renal function. We also recommend that dialysis be introduced at an appropriate period to maintain and guarantee a favorable nutritional status and high quality of life (QOL) of patients [21,22,23,24]. Initiation of dialysis should not be delayed, from a patient prognosis standpoint, in cases where treatment-resistant clinical symptoms appear [25].

Cases where GFR is below 6.0 mL/min/1.73 m2

The maintenance of residual renal function [26, 27], favorable QOL, and high satisfaction [28, 29] have been indicated as medical advantages of PD (Supplementary Provision 4). Meanwhile, medical justifications for establishing initiation standards from only a renal function perspective are not always sufficient in cases where patients are appropriately managed without presenting clinical renal failure symptoms [30,31,32]. However, there is a strong correlation between decreased renal function and worsening nutritional status [33, 34]. The residual renal function after the initiation of PD has a large influence on patient prognosis [35, 36]. Therefore, cases for which initiation of PD is planned should not be left until a later stage. Initiation of dialysis should be considered when the GFR is below 6.0 mL/min/1.73 m2, even in cases where subjective and/or objective symptoms are not observed.
During prospective observational studies, patient groups in which PD was introduced at an eGFR of 5.0–10.0 mL/min/1.73 m2 showed significantly more favorable prognoses than groups in which it was introduced at an eGFR below 5 mL/min/1.73 m2 or above 10 mL/min/1.73 m2 [37]. Meanwhile, randomized controlled trials (RCTs) have reported that patients who underwent planned PD initiation showed no significant differences in prognoses between early-stage (eGFR 10–14 mL/min/1.73 m2) and late-stage (eGFR 5–7 mL/min/1.73 m2) initiation groups [38].

Supplementary Provision 1

In Japan, the eGFR is calculated according to the estimation equation of the Japanese Society of Nephrology [39]. This equation should not be used for evaluating pediatric renal function (please refer to "Initiation in Pediatric Patients"). Currently, there is insufficient evidence on the validity of evaluating the eGFR mentioned above during the dialysis initiation period. Evaluations are conducted with measured GFR to the fullest extent possible for the initiation of dialysis. The Japanese Society for Dialysis Therapy in their "Maintenance Hemodialysis Guidelines: Hemodialysis Initiation" (2013) (Statement 2) recommends measurement-based evaluations such as the inulin clearance test, 24-h-urinalysis-based creatinine clearance, and the sum of creatinine clearance and urea clearance (Ccr + Curea) divided by 2 for accurate evaluations of renal function during the dialysis initiation period [3]. Serum creatinine values or endogenous creatinine clearance are used in the authorization standards for renal dysfunction-related physical disability certificates, and from April 2018 onwards, eGFR is also applicable in level 3 and 4 determinations. A manual ("Selection and Conditions of Renal Failure Treatment" (2018 Edition)) was jointly issued by the Japanese Society of Nephrology, Japanese Society for Dialysis Therapy, Japan Society for Transplantation, Japanese Society for Clinical Renal Transplantation, and the Japanese Society for Peritoneal Dialysis to explain renal replacement therapy during end-stage renal failure. Given the advantages of PD, a PD-first policy has been advocated, in which PD is selected as the first choice of treatment for end-stage renal failure [40]. Given the realities in Japan, our committee has defined this concept as "a thought process that preferentially considers PD in patients with residual renal function in order to sufficiently take advantage of the benefits of PD." Residual renal function refers to renal function following the initiation of dialysis and is clinically defined as a daily urine volume exceeding 100 mL.

Initiation in pediatric patients

Sufficient information related to PD, HD, and kidney transplantation should be provided to the patient and their family before renal replacement therapy is initiated. Additionally, sufficient time should be allowed to consider the most appropriate treatment. Referral to a specialized facility may be required. The estimated glomerular filtration rate (eGFR) for children is used to evaluate renal function, which is a criterion for initiation timing. Dialysis should be considered at an eGFR level of 10-15 mL/min/1.73 m2. The initiation timing should be decided after considering complications such as growth or psychomotor disorders.

Preparation for initiation

Children and their guardians should be provided with comprehensive information regarding dialysis before obtaining their informed consent.
Referrals to specialized facilities are advised for adult patients with CKD stage 3 (eGFR 30–60 mL/min/1.73 m2), when various complications begin to appear, or at an eGFR of 30 mL/min/1.73 m2 at the latest [41]. In pediatric patients (in Japan, pediatric patients are defined as those under 15 years of age), a clinical team with sufficient knowledge should provide information on PD, HD, and kidney transplantation and take sufficient time to consult with children and their guardians before making treatment decisions. In cases of serious complications other than renal failure, the patient and guardians should consult with several different medical professionals so that they are fully informed about the best treatment option to choose. The "Guidelines for discussion about treatment for children with serious illnesses" [42] and "Proposals for the decision-making process relating to the commencement and continuance of maintenance hemodialysis" [43] should be referred to in such cases. Irreversible complications such as those related to growth or social development should be considered when deciding the initiation timing in children. It is important for patients to consider long-term treatment options, including kidney transplantation, and to be referred to a pediatric renal disease specialist at an early stage of CKD. PD is often the preferred treatment option for children due to the many advantages it confers [44], including treatment at home, which is essential for acquiring social skills; daily dialysis, which allows nutritional and fluid uptake essential for growth; and night-time dialysis, which allows for school attendance and extracurricular activities. PD is almost exclusively used for low-weight children, with 87% of children with renal failure under the age of 5 being started on PD [45].

Evaluation methods of renal function and suitability

Renal function in children is evaluated using the eGFR. In Japan, the three equations for eGFR in children are based on Cr [46,47,48], CysC [49], and β2MG [50]. Special care must be taken depending on the patient's age and physique (Supplement). There is no absolute consensus regarding the appropriate renal function-based initiation timing for PD in children [51, 52]. There are no CQs about the standard initiation timing in the evidence-based Clinical Guidelines for CKD 2013 and 2018, and the section on children in the Maintenance Hemodialysis Guidelines [53] states that initiation of dialysis should be considered in children when the GFR is below 10 mL/min/1.73 m2, even if asymptomatic. Meanwhile, among international pediatric initiation standards, the Kidney Disease Outcomes Quality Initiative (K/DOQI) states that dialysis should be considered at an eGFR of 9–14 mL/min/1.73 m2 and started at 8 mL/min/1.73 m2 [54], with initiation at a higher eGFR if the patient exhibits malnutrition, growth disorders, or other complications that cannot be medically controlled [54]. The European pediatric PD working group states that dialysis should be initiated at 10–15 mL/min/1.73 m2 if no symptoms are observed [55]. Generally speaking, dialysis is initiated at an earlier stage than in adult patients after any potential complications have been considered.
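For reference, the creatinine-based pediatric eGFR calculation can be sketched as follows; the height polynomial for the "standard Cr" and the age factor R are those listed in the supplement "Estimated GFR formula in Japanese children" at the end of this chapter, and the example values are hypothetical.

```python
import math

# Sketch of the creatinine-based pediatric eGFR calculation. The height
# polynomial for "standard Cr" and the age factor R follow the supplement
# "Estimated GFR formula in Japanese children" at the end of this chapter;
# the example values are hypothetical.

def standard_cr(height_m: float, male: bool) -> float:
    """Height-dependent standard serum creatinine (mg/dL)."""
    h = height_m
    if male:
        return (-1.259 * h**5 + 7.815 * h**4 - 18.57 * h**3
                + 21.39 * h**2 - 11.71 * h + 2.628)
    return (-4.536 * h**5 + 27.16 * h**4 - 63.47 * h**3
            + 72.43 * h**2 - 40.06 * h + 8.778)


def egfr_cr_pediatric(patient_cr_mg_dl: float, height_m: float, male: bool,
                      age_months: int) -> float:
    """eGFR (mL/min/1.73 m2); R = 1 from 2 years of age, log-corrected for 3-23 months."""
    r = 1.0 if age_months >= 24 else 0.107 * math.log(age_months) + 0.656
    return (110.2 * standard_cr(height_m, male) / patient_cr_mg_dl + 2.93) * r


if __name__ == "__main__":
    # A hypothetical 5-year-old boy, 110 cm tall, with a serum creatinine of 2.0 mg/dL.
    print(round(egfr_cr_pediatric(2.0, 1.10, True, 60), 1))
```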
Absolute indications and relative indications

Absolute indications for dialysis are serious uremic symptoms, such as neurological complications, hypertension that cannot be controlled by antihypertensive drugs, excessive fluid-induced pulmonary edemas that do not respond to diuretics, pericarditis, hemorrhagic complications, and refractory nausea or vomiting [51]. There is no consensus on the appropriate renal function-based initiation timing in children who do not present absolute dialysis indications [51, 52]. Instead, comprehensive evaluations are conducted based on renal function and symptoms. Relative indications include mild uremic symptoms such as fatigue, reduced cognitive function, and decreased QOL in school life, as well as hyperkalemia, hyperphosphatemia, malnutrition, and growth disorders. Growth disorders are included as an important characteristic category of clinical symptoms in the decision-making standards for children. Growth and development are prioritized over renal function in infants [56].

Estimated GFR formula in Japanese children

The equations use Cr (mg/dL), CysC (mg/L), β2MG (mg/L), and height (m). Care must be taken, as inaccurate values may result with Cr-based eGFR, which uses height as a physique indicator, when the patient's muscle mass differs significantly from the standard physique; with CysC-based eGFR during thyroid dysfunction or steroid treatment; and with β2MG-based eGFR when the patient has an inflammatory disease.

Cr eGFR [46,47,48]
$$ \mathrm{eGFR}_{\mathrm{Cr}}\ \left(\mathrm{mL}/\min /1.73\ \mathrm{m}^2\right)=\left(110.2\times \mathrm{standard\ Cr}/\mathrm{patient\ Cr}+2.93\right)\times R $$
*Standard Cr (mg/dL)
$$ \mathrm{Male:\ standard\ Cr}=-1.259\times {\mathrm{Height}}^5+7.815\times {\mathrm{Height}}^4-18.57\times {\mathrm{Height}}^3+21.39\times {\mathrm{Height}}^2-11.71\times \mathrm{Height}+2.628 $$
$$ \mathrm{Female:\ standard\ Cr}=-4.536\times {\mathrm{Height}}^5+27.16\times {\mathrm{Height}}^4-63.47\times {\mathrm{Height}}^3+72.43\times {\mathrm{Height}}^2-40.06\times \mathrm{Height}+8.778 $$
*R: 2 years or older, R = 1; aged 3 to 23 months, R = 0.107 × ln(age in months) + 0.656

CysC eGFR [49]
$$ \mathrm{eGFR}_{\mathrm{CysC}}\ \left(\mathrm{mL}/\min /1.73\ \mathrm{m}^2\right)=104.1/\mathrm{CysC}-7.8 $$

β2MG eGFR [50]
$$ \mathrm{eGFR}_{\upbeta 2\mathrm{MG}}\ \left(\mathrm{mL}/\min /1.73\ \mathrm{m}^2\right)=149.0/\upbeta 2\mathrm{MG}+9.15 $$

Chapter 2 Optimal dialysis

There is no clear definition established for optimal PD. A total weekly Kt/V ≥ 1.7, including both peritoneal and residual renal clearance, is recommended as an indicator of urea (small solute) removal. However, merely increasing the efficiency of small solute removal without considering a patient's general condition does not necessarily reduce the mortality risk. The residual renal function is an important prognostic factor. A strong correlation between ultrafiltration failure and mortality is observed in PD patients with anuria, and appropriate management of body fluid volume is important. β2-microglobulin (β2-MG) has a strong impact on prognosis, but its concentration correlates with the residual renal function, and regulating it through the PD prescription is difficult. The combined therapy of PD and HD (PD + HD) is efficacious in removing solutes, especially those larger than β2-MG.
There is currently no clear definition for the optimal dialysis method, so the guideline classifies this into the following three categories: (1) optimal dialysis from the perspective of solute removal and ultrafiltration, (2) optimal dialysis from the perspective of circulatory dynamics, and (3) optimal dialysis during combined therapy. Details of each are summarized below.

Optimal dialysis from the perspective of solute removal and ultrafiltration (includes β2-MG, acid-base status)

Mass transfer in the peritoneum

The primary mechanisms of PD are molecular diffusion-based mass transfer and osmotic flow-based removal of surplus water. Various numerical models have been utilized for the analysis of PD, and its mechanisms have been clarified. In PD, the capillary vessels in the visceral and parietal peritoneum serve as semi-permeable membranes and correspond to the hollow fibers found in the dialyzers used in HD [57].

Diffusion, osmosis, and convection in the peritoneum

If only the solvent (i.e., water), and not the solute, can pass through a semi-permeable membrane, then fluid on the lower-concentration side will flow into the higher-concentration side over time through osmosis. The minimal amount of pressure needed to stop this phenomenon is referred to as the osmotic pressure. If some solutes can pass through the semi-permeable membrane, then molecular diffusion based on the concentration gradient is generated from the high- to the low-concentration side. Water can be moved in the same direction as molecular diffusion if positive pressure is applied to the fluid on the higher-concentration side or if negative pressure is applied to the fluid on the lower-concentration side. This water movement carries solutes with it and is generally referred to as filtration. The migration phenomenon that accompanies fluid flow is referred to as convection; filtration through the membrane is thus also a form of convection. HD conducts filtration (ultrafiltration) by applying negative pressure on the dialysis fluid side, whereas PD uses a fluid that is hypertonic compared to body fluids and conducts ultrafiltration through osmotic pressure differences. For this reason, PD uses a variety of dialysis fluids with different osmotic pressures. There are also routes through which small amounts of dialysis fluid are reabsorbed into the body through the lymph ducts. A variety of numerical models have been proposed to evaluate peritoneal permeability.

Classic model and the three-pore model

In the first PD model by Henderson, the peritoneum was treated as homogeneous, and it was assumed that only molecular diffusion would occur [58]. Most transport phenomena across the peritoneal membrane for solutes smaller than β2-MG could be explained by a classic mass transfer model assuming that the peritoneum consists of a single pore type. However, such a model cannot account for the permeation of macromolecules that exceed the size of albumin, which has a molecular weight (MW) of 66,000. Rippe et al. proposed a two-pore model, which assumed that there were two kinds of pores in the peritoneum [59, 60]. However, it was later shown that this model could not explain the mechanism of ultrafiltration. Thus, Rippe et al. devised and proposed the three-pore model [60], which assumed that there was a third pore type, called the ultra-small pore, that manages only water transport across the peritoneum. This ultra-small pore represents the water pathway through the cells (aquaporin 1 [61]) and is also called a cell pore.
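As a purely illustrative numerical sketch (not taken from the guideline), diffusion-driven equilibration of a small solute between plasma and the dwelling dialysate can be approximated with a single lumped mass-transfer coefficient; the parameter values below are arbitrary examples.

```python
# Purely illustrative sketch (not from the guideline): diffusion-driven
# equilibration of a small solute between plasma and dwelling dialysate,
# lumped into a single mass-transfer coefficient. Parameter values are
# arbitrary examples.

def dialysate_concentration(c_plasma: float, mtac_ml_min: float,
                            v_dialysate_ml: float, dwell_min: float,
                            dt_min: float = 1.0) -> float:
    """Dialysate concentration after a dwell, assuming a constant plasma
    concentration and pure diffusion: dC_D/dt = (MTAC / V_D) * (C_P - C_D)."""
    c_d = 0.0  # fresh dialysate contains none of the solute
    t = 0.0
    while t < dwell_min:
        c_d += dt_min * (mtac_ml_min / v_dialysate_ml) * (c_plasma - c_d)
        t += dt_min
    return c_d


if __name__ == "__main__":
    # For a urea-like solute, the dialysate approaches the plasma concentration
    # (C_D/C_P -> 1) over a 4-h dwell, consistent with the discussion of urea below.
    c_p = 1.0
    c_d = dialysate_concentration(c_p, mtac_ml_min=17.0,
                                  v_dialysate_ml=2000.0, dwell_min=240.0)
    print(f"C_D/C_P after a 4-h dwell: {c_d / c_p:.2f}")
```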
Solute removal and clearance

Molecular diffusion is the primary mechanism for solute removal in PD. The realistic recommended dialysis dose in PD is set as a total weekly Kt/V for urea ≥ 1.7. However, merely increasing this value does not, by itself, improve prognosis.

Molecular diffusion is the primary mechanism for solute removal in PD. Only solutes up to a MW of around 3000 could be removed with HD when CAPD was proposed in the late 1970s. As such, further attention was given to PD, which can remove larger solutes. However, the super-high-flux (former IV-type and former V-type in the Japanese reimbursement system) dialyzers currently used for over 90% of patients in Japan can remove 200–300 mg of β2-MG with 4 h of HD. In contrast, around 20 mg of β2-MG is removed with standard CAPD in 1 day. HD is over 5 times more efficient when compared over 2 days. Therefore, PD is currently not considered to be superior to HD in terms of removing larger solutes. By rearranging the definition of kidney clearance in living bodies, the PD clearance KP is defined as follows:
$$ {K}_{\mathrm{P}}=\frac{C_{\mathrm{D}}\times {V}_{\mathrm{D}}}{C_{\mathrm{P}}\times t} $$
where CD is the dialysis fluid concentration [mg/mL], CP is the plasma concentration [mg/mL], VD is the drained dialysis fluid volume [mL], and t is the dialysis fluid dwell time [min]. All these factors change with time; therefore, KP itself varies considerably with time.

Importance of optimal dialysis

Decisions on whether a sufficient dialysis dose is provided are based on the following criteria:
(a) No treatment (dialysis)-related complications have occurred
(b) The general condition is favorably maintained
(c) Diagnostic criteria for prognosis are within the favorable range
However, even if it appears that conditions (a)–(c) are satisfied, long-term PD can lead to advanced deterioration of the peritoneum, and encapsulating peritoneal sclerosis (EPS) can occur after the termination of PD. In other words, needlessly continuing PD can be harmful. The conditions mentioned above explain why an optimal dialysis method cannot be clearly defined. A strong negative correlation is observed between the residual renal function and blood β2-MG concentration in PD patients. In other words, removal of larger solutes that exceed the size of β2-MG is dependent on the residual renal function, and their removal through PD has proven to be difficult. With this in mind and using the loss of the residual renal function as a boundary, we need to consider switching the treatment from PD to HD.

Weekly Kt/V for urea (Kt/V)

Kt/V is a non-dimensional dialysis dose defined as the product of urea clearance (K) and treatment time (t) divided by the patient's total body fluid volume (V). Results of the CANUSA study recommended a total weekly Kt/V for urea of 2.0–2.1, which combines both the residual renal function and the peritoneum [62, 63]. Generally, CD/CP ≈ 1.0 is achieved for urea if the dialysis fluid is indwelled for over 4 h. If the patient is anuric, the K in Kt/V is replaced by the PD clearance KP; since CD/CP ≈ 1.0, the definition of KP above makes the following self-evident:
$$ {K}_{\mathrm{P}}\cdot t\approx {V}_{\mathrm{D}} $$
Typically, VD ≈ 9.0 L/day ≈ 63.0 L/week, so if the body fluid volume of a 60-kg patient is set as V ≈ 36.0 L, the weekly KPt/V is:
$$ \frac{K_{\mathrm{P}}t}{V}=\frac{V_{\mathrm{D}}}{V}\approx \frac{63.0}{36.0}=1.75 $$
In other words, it is challenging to achieve Kt/V ≥ 2.0 with PD alone.
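The back-of-the-envelope weekly Kt/V estimate above can be reproduced with the following short sketch; the values are the illustrative ones used in the text.

```python
# Sketch reproducing the back-of-the-envelope weekly Kt/V estimate above for an
# anuric patient, where C_D/C_P ~ 1.0 so that K_P * t is simply the drained
# dialysate volume V_D. The values are the illustrative ones used in the text.

def weekly_ktv_anuric(drained_l_per_day: float, body_fluid_l: float) -> float:
    """Weekly Kt/V when K_P * t equals the weekly drained volume."""
    return (drained_l_per_day * 7.0) / body_fluid_l


if __name__ == "__main__":
    # V_D ~ 9.0 L/day and V ~ 36.0 L for a 60-kg patient give Kt/V ~ 1.75,
    # illustrating why Kt/V >= 2.0 is hard to reach with PD alone.
    print(round(weekly_ktv_anuric(9.0, 36.0), 2))
```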
Additionally, the ADEMEX study showed that no changes in prognosis were observed even if Kt/V was deliberately increased [64]. Moreover, data from Hong Kong showed that favorable prognoses were observed even at Kt/V ≤ 2.0 [65]. Currently, a total Kt/V ≥ 1.7 is recommended as a realistic value [66,67,68].

Weekly creatinine clearance (Ccr)

Ccr is normalized with a Westerner's body surface area (peritoneal surface area) of 1.73 m2 when compared with standard values across the world. Ccr = 60 L/week/1.73 m2 is used as a target value for PD [62], but subsequent analyses supported the formerly used value of 45 L/week/1.73 m2 [69]. However, creatinine concentrations are also generally high in patients whose muscle mass is preserved. Analysis of 4 years of statistics from the Japanese Society for Dialysis Therapy (JSDT) showed an inverse correlation between mortality risk and creatinine concentration [70]. Based on these facts, Ccr is not adopted as an indicator of optimal dialysis in the current guidelines.

Kt/V for urea and prognosis

Reports have indicated decreases in mortality risk with increased Kt/V in HD [71]. However, the HEMO study published in 2002 [72] showed no statistically significant differences in prognosis between groups with a single-pool Kt/V of 1.32 and 1.71. In the ADEMEX study [64], comparison of the 2-year survival rates in 965 PD patients separated into a group meeting the K/DOQI guideline [63] target of Ccr = 60 L/week/1.73 m2 (total Kt/V = 2.13) and a group maintained at Ccr = 45 L/week/1.73 m2 (total Kt/V = 1.62) showed no statistically significant differences. In other words, simply increasing the small solute removal efficiency in both HD and PD without considering the patient's general condition did not reduce mortality risk. Residual renal function, whose importance was clarified by the NECOSAD study [73], is another important prognostic factor. Additionally, the EAPOS study, which focused on APD patients with anuria [74], indicated a strong correlation between ultrafiltration failure and an increase in the mortality rate.

Methods that increase dialysis dose

The removal efficiency of larger solutes in PD is almost entirely unrelated to the amount of dialysis fluid and is dependent on the residual renal function. Methods that combine PD and HD (PD + HD combined therapy) are efficacious in improving the removal of these solutes. Increasing the amount of dialysis fluid can increase the dialysis dose for small solutes, but increasing the number of dialysis fluid exchanges during the day is not practical; the only way to achieve this is through APD, which can actively exchange dialysis fluid while the patient is asleep at night. Ultrafiltration in PD is conducted by establishing an adequate osmotic pressure difference between the dialysis fluid and body fluid (blood). Ultrafiltration failure can result from catheter abnormalities or from increased peritoneal permeability. Ultrafiltration can also be dramatically improved by using dialysis fluid with macromolecular dextrin (icodextrin) as the osmotic agent.

Ultrafiltration and selection of dialysis fluid

Ultrafiltration in PD is conducted by establishing an adequate osmotic pressure difference between the dialysis fluid and body fluid (blood). Glucose is used as an osmotic agent because it is cheap, has no biological toxicity within the physiological concentration range in the bloodstream, and serves as a source of energy after absorption.
There are no significant differences between the dialysis fluids used in PD and HD, except that the former does not include K+ and uses lactic acid as a buffer (bicarbonate has also recently been used). However, because the PD dialysis fluid induces ultrafiltration through osmotic pressure differences, each dialysis fluid manufacturer prepares three fluid types with different sugar concentrations. The osmotic pressures of these are around 460, 400, and 360 mOsm/kg; they are referred to as "high-concentration fluid," "moderate-concentration fluid," and "low-concentration fluid," respectively. As is clear from its intended use, even the "low-concentration fluid" is hypertonic compared to body fluid (approximately 300 mOsm/kg), and continuous glucose exposure of the peritoneum could promote peritoneal degradation. This is exacerbated with higher sugar concentrations; currently, the high-concentration fluid is not used in Japan, and the prescription of moderate-concentration fluid is limited. When the dialysis fluid is at neutral pH, heat sterilization is accompanied by caramelization of glucose, and glucose degradation products (GDPs) are produced. In the past, an acidic solution with a pH of around 5.0–5.5 was used as the dialysis fluid to avoid this degradation reaction. However, there were concerns regarding GDP toxicity and the non-physiological nature of the low pH. Neutralized dialysis fluids with minimal GDPs are currently being used in Japan.

Causes of ultrafiltration defects and countermeasures

Ultrafiltration defects include cases caused by catheter abnormalities and cases caused by increased peritoneal permeability. In the former, the following causes can be cited and addressed:
Catheter misplacement
Omentum entanglement with a side hole
Excessively tightened catheter suture in the abdominal wall
Kinking of the catheter or connection tube
In the latter, the following countermeasures can be taken:
(a) Use of a higher concentration of dialysis fluid
(b) Frequent exchanges of dialysis fluid
(c) Use of macromolecular dextrin (icodextrin) products
(d) Implementation of peritoneal resting (short-term, medium-term, long-term)
The peritoneal glucose exposure increases with the use of a higher concentration of dialysis fluid (a), which can further promote peritoneal permeability. The frequent exchange of dialysis fluid (b) can be successful if an APD cycler is used and the dialysis fluid is exchanged over a short period at night. The use of macromolecular dextrin products (c) is limited to specific dialysis fluid manufacturers, so this cannot be applied to all patients. Furthermore, reports have indicated that the long-term use of icodextrin dialysis fluid (limited to once per day) has gradually resulted in the loss of ultrafiltration [75], so caution is required. The implementation of peritoneal resting (d) is not necessarily efficacious for all patients either; however, there have been many reports on this method, with the oldest going back to the 1990s [76,77,78,79,80,81].

Acid-base equilibrium

Neutralized dialysis fluids are primarily used in Japan. There are no substantial in-treatment changes in pH in CAPD, and values are confined to a relatively narrow normal range.

Dialysis fluid pH and buffer (alkaline agent)

As previously mentioned, acidic heat-sterilized dialysis fluid was used in the past.
However, reports have indicated that even if dialysis fluid with a pH of 5.2 is retained in the abdominal cavity, its pH value increases to over 6.5 within 15 min [82]. This is due to the influence of residual liquid that had not drained during the fluid exchange. Furthermore, the dialysis fluid is only 2 L compared to over 30 L of total body fluid. However, there were concerns that even this brief exposure to dialysis fluid of non-physiologic pH at each infusion could promote peritoneal deterioration. The utilization rate of neutralized dialysis fluid has rapidly increased in Japan since its introduction to the market in 2000. Low-GDP neutralized dialysis fluid is primarily used, with the exception of some icodextrin dialysis fluids. Collaborative multi-facility research evaluating neutralized dialysis fluids reported no changes in peritoneal fibrosis markers but showed improvements in mesothelial cell markers [83]. Pentosidine also decreased with neutralized fluid use, but its removal takes time, so neutralized fluid should ideally be used from the early stage after initiation. Based on these findings, although the necessity of neutralized fluids for the long-term use of PD has not been internationally proven, their use should be recommended as a finding from Japan. Many PD fluids include lactate as a buffer, but high lactate concentrations are considered non-physiologic. Plasma bicarbonate (HCO3−) increases over time in PD patients, and there have been concerns about excessive correction of acidosis or an alkalosis-based risk of vascular calcification. Based on these aspects, dialysis fluids with a reduced amount of lactate or with bicarbonate have been developed.

Acid-base balance correction in body fluids

Metabolic acidosis is part of the pathology of renal failure, so alkaline agent (lactic acid)-based correction of acidosis is essential. Compared to intermittent treatments such as HD, where pH can vary considerably, CAPD results in minimal changes in pH, and values remain confined to a relatively normal range. Albumin synthesis from amino acids in the liver is inhibited in the acidic pH range of body fluids. Clinicians have often indicated the superiority of CAPD because of its acid-base balance correction effects [84, 85].

Optimal dialysis from the perspective of circulatory dynamics

Blood pressure management

The most important general principle of antihypertensive therapy is the optimization of "dry weight." Antihypertensive medication should be considered in cases where achievement and maintenance of "dry weight" do not result in blood pressure reduction. The relationship between blood pressure and prognosis is represented by the so-called "reverse epidemiology." However, care must be taken, as this is the result of observational studies and has no intervention-based evidence. The objective of blood pressure management is to maintain systolic blood pressure (SBP) and diastolic blood pressure (DBP) below 140 and 90 mmHg, respectively. Special care must be taken to ensure that the SBP is not excessively reduced below 110 mmHg. Body fluid volume (extracellular volume) overload is the most important cause of hypertension. Renin/angiotensin/aldosterone inhibitors and loop diuretics should be considered for antihypertensive medication. Blood pressure should be controlled with consideration for circadian, weekly, and seasonal variabilities of home blood pressure.
Epidemiology and pathology

The FY 2016 statistics collected by the Japanese Society for Dialysis Therapy indicated that cardiovascular disorders such as cardiac insufficiency, cerebrovascular diseases, and myocardial infarctions were a significant cause of death among dialysis patients, accounting for 36.1% of deaths [86]. Hypertension is a significant risk factor for the arteriosclerosis that causes these diseases, and its prevalence at the time of dialysis initiation is between 80 and 90%. Therefore, it is thought to be closely related to patient prognosis. Compared to HD patients, PD patients have a low frequency of hypertension or left ventricular hypertrophy (LVH) and consequently have fewer cases of severe arrhythmia [87]. Contributing causes of hypertension in HD patients include (a) body fluid volume (extracellular volume) overload, (b) renin-angiotensin system abnormalities (inappropriate angiotensin II reactivity to volume load), (c) increased sympathetic nerve activity, (d) endothelium-dependent vasodilation disorders, (e) uremic toxins, (f) genetic factors, and (g) erythropoietin. In particular, body fluid volume overload has been cited as the primary contributory cause, and reports have indicated that its correction has resulted in controlled blood pressure levels in over 60% of patients [62, 88,89,90]. In other words, the most important general principle of antihypertensive therapy in dialysis patients, including those on PD, is the optimization of "dry weight." Antihypertensive medication should be considered in cases where achievement and maintenance of "dry weight" do not result in blood pressure reduction. The prevalence of hypertension in PD patients is between 69 and 88%; however, there are several different definitions of hypertension. Hypertension prevalence was 88% when hypertension was defined as an SBP of over 140 mmHg or a DBP of over 90 mmHg or when antihypertensive drugs were administered [91]. On the other hand, hypertension prevalence was 69% when hypertension was defined as blood pressure values exceeding 140/90 mmHg during the day or 120/80 mmHg at night [92] over 30% of the time during 24-h ambulatory blood pressure monitoring (ABPM). Prognosis based on blood pressure levels in PD patients is represented by "reverse epidemiology" [93]. The total mortality rate significantly increases when the blood pressure falls below the reference SBP range of 111–120 mmHg. However, no significant increases in the total mortality rate were observed even when SBP levels exceeded this range. A virtually identical trend was also observed in HD patients [94].

Target blood pressure

It is well known that hypertension results in poor prognosis due to its ability to cause cardiovascular complications in the general population and in those with CKD. However, reports have indicated that the relationship between blood pressure and prognosis in PD patients represents the so-called "reverse epidemiology," as described above [93]. Regarding the relationship between mortality and blood pressure, prognosis also improves as blood pressure increases in the early stages of PD initiation (within 1 year). However, this relationship was not apparent in patients who were waiting for kidney transplants and had dialysis initiated within the previous 6 months [95]. The same study also showed that prognosis worsens with an increase in blood pressure 6 years after dialysis initiation [93].
These observational studies show that it is challenging to set a target blood pressure. No research has directly shown improved prognosis in PD patients with antihypertensive therapy. However, antihypertensive therapy has improved prognosis in the general population and in those with CKD [96]. An ideal blood pressure level should always be maintained below 140 mmHg (SBP) and below 90 mmHg (DBP). Similar recommendations were made in the joint review by the European Renal Association (ERA-EDTA) and the European Society of Hypertension (ESH) [97] and by the International Society for Peritoneal Dialysis (ISPD) [98]. (In the current guidelines of JSH2019 and/or CKD2018, the target blood pressure has been modified according to age (under or over 75 years) and the presence or absence of proteinuria and diabetes.) The previously mentioned observational studies [93] also reported worsened prognosis when the SBP was below 110 mmHg. CKD research in Japan has shown that cardiovascular events increase and prognosis worsens when the SBP falls below 110 mmHg [99]. The same is thought to apply to PD patients, and special care must be taken when attempting to excessively lower blood pressure.

"Dry weight" management

Body fluid volume (extracellular volume) overload is the most important cause of hypertension in PD patients as well [100, 101]. Systematic reviews have examined body fluid analyses using bioimpedance analysis (BIA), in which dry weight was calculated and the relationships with total mortality and blood pressure were determined. These reviews showed that the total mortality rate did not improve, but significantly better control of the SBP was obtained [102]. Reports have also indicated that decreases in residual renal function (RRF) are rapid, regardless of whether body fluid volume overload is transient or constant [103]. Excess body fluid volume continues to decrease RRF, and it is thought that hypertension becomes more likely as a result. Reports have indicated a relationship between hypertension and high-transport peritoneal function (i.e., increased peritoneal permeability). This is thought to be because ultrafiltration failure stems from high transport, which results in excess body fluid volume, an increased frequency of hypertension during the day and night, and an increased left ventricular mass index (LVMI) [104]. The frequency of body fluid overload is higher in PD patients than in HD patients, and as a result, the frequency of antihypertensive agent administration is higher in PD patients as well [105]. Increased duration of PD also induces excess body fluid through ultrafiltration failure, which worsens blood pressure control [106, 107].

Selection of antihypertensive medications

Maintenance of urine volume in PD patients with RRF is important from a blood pressure management perspective. There were no statistically significant differences in RRF maintenance between groups that used furosemide and those that did not; however, the urine volume and urinary Na excretion were both higher in the former group [108]. Reports have indicated that tolvaptan not only increases the urine volume but also increases urinary Na excretion and improves RRF [109]. (The patient background in reference [53] is basically heart failure with preserved ejection fraction (HFpEF).) Reports in PD patients with cardiac failure have also indicated improved RRF with tolvaptan.
Tolvaptan was reported to decrease both extracellular water (ECW) and intracellular water (ICW) and to improve body fluid control without the occurrence of hyponatremia [110]. Reports on mineralocorticoid receptor antagonists (MR antagonists) such as spironolactone showed preventive effects against decreases in left ventricular ejection fraction and against cardiac hypertrophy [111]. RRF maintenance is important when considering the prognosis of PD patients. Reports on renin/angiotensin inhibitors indicated that there were no differences in the antihypertensive effects of the angiotensin-converting enzyme inhibitor (ACEI) ramipril [112] and the angiotensin receptor blocker (ARB) valsartan [113] compared to placebo; however, they had a significant effect on RRF maintenance. In contrast, another report indicated that ACEIs and ARBs had no significant effects on RRF maintenance or the time to anuria [114]. Therefore, further research is required. Ultrafiltration is conventionally conducted with high glucose concentration PD dialysate injections under conditions of body fluid volume overload. However, Japan does not use high glucose concentration PD dialysate, as this worsens the risk of EPS onset. Both SBP and DBP showed statistically significant decreases when icodextrin was used instead of moderate glucose concentration dialysate [115].

Blood pressure variability

Night-time hypertension in PD patients is often represented by non-dipper type blood pressure variation [116, 117]. There is a high prevalence of early-morning hypertension as well [118]. Increased body fluid volume is considered to be one of the causes. An increase in ultrafiltration is accompanied by a decline in blood pressure [115]. PD patients obviously have less blood pressure variation over 1 week than HD patients, who need to receive short-term ultrafiltration at regular intervals; however, this has only been shown in a small number of patients [119]. Early-morning blood pressure in HD patients before the dialysis session that follows the 2-day interdialytic interval is extremely high and is associated with a higher mortality rate. However, these types of changes are not evident in PD patients because PD is a continuous form of dialysis [120, 121]. With regard to seasonal variation, blood pressure decreases during the warmer seasons and increases during the colder seasons [122]. Therefore, prevention of cardiovascular diseases must involve blood pressure control that takes into account the intraday, daily, weekly, and seasonal variations of blood pressure.

Body fluid volume management (maintenance of the optimal weight, "dry weight")

It is important to avoid body fluid volume (extracellular fluid volume) overload and maintain an optimal "dry weight." As shown in the NECOSAD study, the residual renal function regulates prognosis and is an important factor [73]. The EAPOS study [74] on APD patients with anuria also indicated strong correlations between ultrafiltration failure and increases in the mortality rate, which highlights the importance of "dry weight." As shown above, the optimal weight is an important element in PD patients; however, reports have stated that patients with body fluid overload also comprised over 30% of PD cases in Japan [123]. BIA is a favorable indicator for determining "dry weight" [102], but it is not currently used in all facilities.
Furthermore, different frequency bands may be used depending on the device, ICW may vary according to physical exhaustion, and it is unclear whether absolute value evaluations can be applied in clinical settings. BIA should therefore be used as a reference while observing changes over time [124]. There are conventional methods for determining dry weight in HD [88]; in PD, dry weight is determined clinically using the following evaluations:
(a) No peripheral edema is present during a physical examination
(b) No pleural effusion or pulmonary congestion is present on chest X-ray, and the cardiothoracic ratio is below 50% for males and 53% for females
(c) The atrial natriuretic peptide (ANP) concentration is between 50 and 100 pg/mL
(d) The inferior vena cava diameter and its respiratory changes (evaluated using the expiratory inferior vena cava diameter (IVCe) and the inspiratory inferior vena cava diameter (IVCi); IVCe between 14 and 20 mm and a collapsibility index = (IVCe − IVCi) / IVCe ≥ 0.5 are considered normal) are used as references
These indicators are also used when observing changes over time [125]. There are no specific body fluid volume management therapies in PD patients; the details of therapy are discussed in the section "Blood pressure management." However, the salt intake amounts outlined by the Japanese Society of Nephrology, calculated using the PD ultrafiltration volume and urine volume, should be targeted and maintained. Salt intake can be estimated as [ultrafiltration volume (L) × 7.5 g] + [0.5 g per 100 mL of residual renal urine volume], or as 0.15 g/kg/day with an upper limit of 7.5 g in cases where the urine volume measurement is difficult [126]. Please refer to "Chapter 3 Nutritional management" for further detail.

Optimal dialysis in combination therapy (combined PD + HD therapy)

Combination therapy is a treatment method that improves insufficient dialysis (insufficient solute removal, overhydration) in PD-only treatment.

Combined PD + HD therapy (definitions, current status, objectives)

Combined PD + HD therapy incorporates HD once every 1–2 weeks. As of December 31, 2016, over 20% of PD patients in Japan use combination therapy. Supplementing PD-only therapy with HD in patients with insufficient solute removal or overhydration is important.

Definition and current status

Combined PD + HD therapy is a treatment method that adds HD about once every 1–2 weeks during PD therapy [127]. Combined PD + HD therapy has been implemented since the 1990s, but HD service fees could not be claimed at the time, and it was not widely used [128]. Cases that utilized combined PD + HD therapy increased after reimbursement claims for artificial kidney (HD/HDF) treatment at a frequency of once per week in PD patients were authorized in April 2010. Statistical surveys conducted by the Japanese Society for Dialysis Therapy in 2016 indicated that there were 1831 cases in which combined PD + HD therapy was implemented, which was over 20% of all PD cases (total PD patient number, 9021). Therefore, it is clear that combined PD + HD therapy has become an established treatment method. When combination status was evaluated by PD history, 3.4% of patients with a PD history of less than 2 years used this therapy, whereas 53.1% of those with over 8 years of PD history used it, indicating that the number of patients who use combination therapy increases over time.
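Returning to the body fluid volume management section above, the cited salt intake estimate can be illustrated numerically as follows; the example values are hypothetical.

```python
from typing import Optional

# Numerical illustration of the salt intake estimate cited under body fluid
# volume management above: [ultrafiltration volume (L) x 7.5 g] +
# [0.5 g per 100 mL of residual renal urine volume], with the fallback of
# 0.15 g/kg/day (upper limit 7.5 g) when urine volume cannot be measured.
# Example values are hypothetical.

def salt_target_g_per_day(uf_l_per_day: Optional[float],
                          urine_ml_per_day: Optional[float],
                          body_weight_kg: float) -> float:
    if uf_l_per_day is None or urine_ml_per_day is None:
        return min(0.15 * body_weight_kg, 7.5)
    return uf_l_per_day * 7.5 + (urine_ml_per_day / 100.0) * 0.5


if __name__ == "__main__":
    print(salt_target_g_per_day(1.0, 400.0, 60.0))   # 7.5 + 2.0 = 9.5 g/day
    print(salt_target_g_per_day(None, None, 60.0))   # fallback, capped at 7.5 g/day
```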
Decreased or absent residual renal function (RRF), associated with extended PD duration, and decreased solute and water removal due to deteriorated peritoneal function are thought to underlie this increase in the use of combination therapy over time. The majority of patients (over 80%) combine HD with PD at a frequency of once per week [86].

Objectives of combination therapy

The objectives of combined PD + HD therapy are to improve insufficient solute removal and overhydration. PD retains dialysis fluid within the abdominal cavity and is a renal replacement therapy that corrects abnormalities of body fluid composition and volume in renal failure patients through diffusion-based solute removal and osmosis-based ultrafiltration. However, there are limits to the volume of dialysis fluid that can be stored within the abdominal cavity, as well as to the time and frequency of storage in a single day (24 h). There are therefore also limits to the removal of uremic solutes. For these reasons, the commencement of PD therapy from the point where RRF is present is recommended to sufficiently assure the solute removal volume [66]. Solute and water removal during PD therapy while RRF is still present is the sum of the contributions from PD and RRF. However, after commencing PD therapy, RRF declines and disappears over time; solute removal then depends entirely on PD therapy, and the amount of solute removal decreases. Combination therapy with HD as a supplement for solute removal is becoming an effective strategy for cases where PD therapy was started after the loss of RRF or where RRF was lost following long-term PD therapy. As of December 31, 2006, the most common reason for withdrawing from PD therapy in Japan was peritonitis (27.7%), followed by ultrafiltration failure (15.5%) and insufficient dialysis (13%). Thus, next to peritonitis, ultrafiltration failure and insufficient solute removal were the most common reasons for PD therapy withdrawal. These survey results also show the importance of combining PD with HD to supplement ultrafiltration failure and insufficient solute removal as a measure for the prevention of PD withdrawal [129].

Calculation of solute removal volume in combination therapy

There is no unified calculation method for determining solute removal in combination therapy.

Calculation of solute removal volume in combined PD + HD therapy

As previously mentioned, an objective of combined PD + HD therapy is to supplement insufficient solute removal. The calculation of the solute removal volume is therefore important when discussing the adequate dialysis dose during combined PD + HD therapy. PD is a continuous therapy, so in cases where RRF is still present, RRF-based creatinine clearance (CCr) and urea clearance (CUn) can simply be added to PD-based CCr and CUn. Meanwhile, CCr and CUn from an intermittent therapy (HD) cannot simply be added to RRF- and PD-based CCr and CUn. For these reasons, there have been attempts to evaluate the dialysis dose during combined PD + HD therapy and calculate solute removal so that the solute removal volume in PD + HD cases can be compared with that of other modes of therapy (PD only, HD only). Kawanishi et al. calculated the solute removal volume from collected PD effluent and HD dialysate [130] and also evaluated combined PD + HD therapy using equations such as the equivalent renal urea clearance (EKR) [131]. Calculations based on EKR are simple because PD effluent and HD dialysate do not need to be collected, but their applicable criteria are limited. Furthermore, EKR can overestimate the solute removal volume in certain cases [131]. Yamashita et al.
examined PD effluent, HD dialysate, and 24-h urine, and used the solute concentrations in these fluids to calculate the total solute removal volume, from which they derived a concept called "the clear space." This was deemed to be a useful indicator of solute removal during combined PD + HD therapy and a useful metric for comparison with other treatment methods [132]. As stated above, there is no consensus regarding the calculation of the dialysis dose during combination therapy and no unified solution. However, it is impossible to discuss issues relating to target dialysis doses in combined PD + HD therapy without a standardized calculation method for the dialysis dose; various clinical research-based analyses are likewise impossible. We anticipate unified solutions for the calculation methods of solute removal volumes during combined PD + HD therapy in the future.

Effects of combination therapy

Circulatory dynamics can be improved with combination therapy. Achievement of an optimal "dry weight" is possible. The effects of combination therapy include the following:
Improved blood pressure
Decreased serum β2-MG concentration
Decreased serum creatinine concentration
Improved anemia
Decreased blood CRP concentration
Increased PD-based ultrafiltration and increased CCr
Maintenance of serum albumin concentration

Body fluid/circulatory dynamics management in combined PD + HD therapy

Body fluid management is less stable in PD patients than in HD patients, because water removal depends on ultrafiltration through the peritoneum and, where RRF is present, on urine volume. Reports have indicated a high mortality rate in high transporters, a status that is often a sign of ultrafiltration failure in PD patients [133], with 55% of PD withdrawals being due to failure to control body fluid [134]. In such cases of ultrafiltration failure and excess body fluid, combining PD with HD assures ultrafiltration and is efficacious in maintaining a suitable body fluid condition in which edema and hypertension are not present. An analysis by Matsuo et al. of 53 patients who transitioned from PD-only therapy to combined PD + HD therapy showed that combined PD + HD therapy resulted in decreased body weight (P < 0.01), SBP (P = 0.03), administered antihypertensive agent dose (P < 0.01), and blood atrial natriuretic peptide (ANP), and that combining with HD is efficacious in controlling body fluid volume [135]. Reports have also indicated that, in addition to water removal by HD-based ultrafiltration, the PD effluent volume increased. An analysis by Suzuki et al. of 7 patients who underwent HD once a week combined with PD therapy showed that the effluent volume increased from an average of 890 to 1150 mL/day [136]. The mechanism by which the PD effluent volume improves when HD is added is not clear, but it is assumed that body fluid/circulatory dynamics management becomes easier with combined PD + HD therapy. With combined PD + HD therapy, the achievement of dry weight, which is difficult with PD-only therapy, becomes possible owing to the guaranteed ultrafiltration from HD implementation. However, the dry weight is only achieved on the day when HD is conducted, and the long-term clinical effects of this are unclear.

Effects of combined PD + HD therapy

The reported effects of combined PD + HD therapy were mentioned in the key points [135,136,137].
As mentioned previously, reports on improved circulatory dynamics with combined PD + HD therapy include decreased body weight (potentially reflecting improvement of overhydration), SBP, antihypertensive agent administration, and blood ANP concentration. Regarding solute removal, Matsuo et al. reported decreased serum β2-MG and serum creatinine concentrations [135]. Dialyzers with high β2-microglobulin (β2-MG) removal performance (β2-MG clearance of over 50 mL/min) have been increasingly used following the functional classification of dialyzers in 2006. This removal performance [138] is thought to account for the increased β2-MG removal achieved by combining with HD. Matsuo et al. also reported an increased hemoglobin concentration (average increase from 8.2 to 10.7 g/dL) and a decreased erythropoietin dose (average decrease from 5800 units/week to 4556 units/week) [135]. Hemoglobin value comparisons between the four treatment method categories of "HD (F) only," "HD (F) only - PD catheter present," "PD only," and "PD + HD combined" in the statistical survey conducted by the Japanese Society for Dialysis Therapy in 2009 showed that hemoglobin values were highest in the "PD + HD combined therapy" category for both males and females (average values of 10.8 g/dL and 10.5 g/dL, respectively) [71]. The improvement in anemia was thus also maintained with combined PD + HD therapy. The mechanisms for the decreased erythropoietin dose and increased Hb values in combined PD + HD therapy are unclear; however, the increased solute removal due to the combination with HD is thought to be the cause [139]. CRP, an inflammation marker, is thought to be a risk factor for worsening dialysis-related complications and malnutrition. Reports have indicated decreased CRP concentrations as a result of combined PD + HD therapy (average decrease from 0.5 to 0.2 mg/dL) [135]; improved nutritional status and prevention of the onset of complications such as arteriosclerosis are therefore also expected from this therapy. Suzuki et al. [136] and Kawanishi et al. [137] reported an increased ultrafiltration rate with combined PD + HD therapy. Additionally, Suzuki et al. reported increased CCr due to PD [136]. The increased ultrafiltration is thought to be due to improvement (i.e., reduction) of the increased peritoneal permeability. In general, if peritoneal permeability is improved (brought under control), solute permeability is also controlled, and solute removal would be expected to decrease. However, as mentioned in "Solute removal and clearance," the PD clearance KP is proportional to the product of CD (the solute concentration in the effluent) and VD (the effluent volume). As such, KP can increase if the decreased solute permeability of the peritoneum is compensated by increased ultrafiltration (i.e., if VD increases even though CD decreases). The increased PD CCr reported by Suzuki et al. is therefore thought to reflect an increase in ultrafiltration (VD) resulting from improvement of the increased peritoneal permeability, which more than compensated for the accompanying decrease in the effluent creatinine concentration (CD). Moreover, analyses by Ueda et al. of patients who underwent combined PD + HD therapy from the initiation of dialysis reported that combined PD + HD therapy maintained significantly higher serum albumin concentrations than PD-only therapy.
Transitioning to combined PD + HD therapy may be one alternative treatment strategy for patients with decreased (or decreasing) serum albumin concentrations during PD-only therapy [140]. The effects of combined PD + HD therapy have been explained here. There are several reports in which combined PD + HD therapy proved to be a favorable treatment modality compared to PD-only therapy; unfortunately, research reports on these effects are still relatively few, and we look forward to future analyses and investigations based on prospective observational studies and randomized controlled trials involving larger numbers of patients.

Optimal dialysis in combined PD + HD therapy

Establishing the solute removal volume in combination therapy is currently difficult. Optimal dialysis should be determined based on clinical inspection values and physical examination findings.
With regard to circulatory dynamics, there is no edema or hypertension, and a suitable body fluid condition is maintained:
No hypertension (blood pressure below 140/90 mmHg)
No pulmonary congestion in chest X-rays
The cardiothoracic ratio is below 50%
Inspection results reveal the following:
Hb concentration between 10 and 12 g/dL is maintained
Serum β2-MG below 30 mg/L is maintained
No symptoms of renal failure such as malnutrition or anorexia

Optimal dialysis indicators in combined PD + HD therapy

The solute removal volume and solute removal efficiency achieved by combined PD + HD therapy should be considered when discussing optimal dialysis during this therapy. However, as previously mentioned, calculation methods for the solute removal volume in combined therapy have not been standardized either in Japan or overseas. As such, the optimal dialysis of combined PD + HD therapy can only be discussed in terms of the clinical inspection values and physical examination findings of the present therapy method. The "Adequacy of Peritoneal Dialysis" chapter in the Peritoneal Dialysis Guidelines published by the Japanese Society for Dialysis Therapy in 2009 recommended "maintaining a value of 1.7 for weekly urea Kt/V in combination with residual renal function" for the dialysis dose. The guidelines also recommend that the patient should "change prescriptions or therapy methods in cases where renal failure or malnutrition symptoms appear, regardless of whether the patient is conducting optimal dialysis (i.e., Kt/V is maintained at 1.7)" [66]. In other words, "a weekly urea Kt/V of 1.7" is a necessary condition, and the "absence of renal failure symptoms and malnutrition" is a sufficient condition. The clinical inspection values and physical examination findings are important in addition to the dialysis dose. In addition, the clinical inspection and physical examination findings of combined PD + HD therapy are advocated as indicators for optimal dialysis in the clinical guidelines published by the Japanese Society for Dialysis Therapy and are designated as "appropriate dialysis indicators for combined PD + HD therapy," similar to those in PD- and HD-only therapy. Transitioning from PD-only therapy to combined PD + HD therapy, or changing the prescriptions in PD + HD therapy, should be considered in order to maintain the following physical examination findings and inspection values as optimal dialysis indicators for combined PD + HD therapy.
Edema and hypertension are absent, and suitable body fluid conditions are maintained [66, 141, 142]:
No pulmonary congestion present in chest X-rays
Cardiothoracic ratio below 50%
The objectives of combined PD + HD therapy are to improve solute removal deficiencies and excess body fluid. As such, the physical examination findings of body fluid conditions deemed optimal in PD- and HD-only therapy and the target blood pressure are indicators that should also be achieved during combined PD + HD therapy. A target blood pressure of "below 140/90 mmHg" is set for patients who undergo combined PD + HD therapy, given that the "Japanese Society for Dialysis Therapy Guidelines for Management of Cardiovascular Diseases in Patients on Chronic Hemodialysis" state that "in patients under stable long-term maintenance dialysis with no impairment of the cardiac function, we suggest the target of antihypertensive treatment should be blood pressure below 140/90 mmHg before dialysis at the beginning of the week" and mention "below 140/90 mmHg" as a target value in the section "Blood pressure management." Furthermore, most patients undergoing combined PD + HD therapy have high levels of overhydration prior to HD, and, similar to when conducting PD, blood pressure should be managed to "below 140/90 mmHg" prior to HD. Body fluid condition and blood pressure management should be based on appropriate salt and water intake, and heavy reliance on ultrafiltration during HD sessions should be avoided. Thus, setting an upper limit of 15 mL/kg/h for the ultrafiltration rate during a single HD session is ideal, and correcting excess body fluid with sudden ultrafiltration should be avoided (an illustrative calculation is given at the end of this passage).
Maintain a pre-HD Hb concentration between 10 and 12 g/dL
The 2015 Japanese Society for Dialysis Therapy Guidelines for Renal Anemia in Chronic Kidney Disease recommended that "target Hb levels that should be maintained for adult peritoneal dialysis (PD) patients are between 11 and 13 g/dL." However, no target Hb value is mentioned in relation to combined PD + HD therapy. In combined PD + HD therapy, blood is concentrated during HD sessions because of ultrafiltration; in HD-only therapy, prognosis is most favorable when Hb levels are between 11 and 12 g/dL, and the mortality risk increases when these levels are above 12 g/dL [143]. For these reasons, the target Hb levels in patients on combination therapy were set between 10 and 12 g/dL. Low erythropoiesis-stimulating agent (ESA) reactivity in PD-only therapy is defined as cases in which the total Kt/V of residual renal function and PD therapy is maintained above 1.7, yet the target hemoglobin concentration is not achieved with weekly administrations of 6000 units of rHuEPO or 60 μg of darbepoetin [144]. This type of low reactivity should be considered in combined PD + HD therapy as well, and the achievement or maintenance of target hemoglobin levels through combination therapy is ideal. The presence of iron deficiency or inflammation should be investigated, and the prescription content of HD should be analyzed, if target hemoglobin concentrations are not achieved in combined PD + HD therapy even after the administration of 6000 units of rHuEPO or 60 μg of darbepoetin.
Maintain a pre-HD serum β2-MG level of below 30 mg/L
Serum β2 microglobulin (serum β2-MG) is a causative agent of dialysis amyloidosis. There are limits to the amount of β2-MG that can be removed by PD, but removal can be dramatically increased by combining PD with HD.
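Two of the quantitative targets mentioned above can be illustrated with hypothetical figures; the patient values below are assumptions chosen only for illustration. For the ultrafiltration ceiling of 15 mL/kg/h, a patient with a dry weight of 60 kg undergoing a 4-h HD session would have a maximum ultrafiltration per session of
$$ 15\ \mathrm{mL/kg/h}\times 60\ \mathrm{kg}\times 4\ \mathrm{h}=3600\ \mathrm{mL}=3.6\ \mathrm{L}. $$
Similarly, the weekly urea Kt/V of 1.7 cited from the 2009 guidelines refers to PD plus residual renal function and can be checked with the usual adequacy calculation (the document notes that no standardized method exists for adding the HD component). For instance, if the drained dialysate multiplied by the dialysate-to-plasma urea ratio gives a peritoneal urea clearance of 9.0 L/day, residual renal urea clearance is about 0.7 mL/min (≈ 1.0 L/day), and the urea distribution volume V (e.g., estimated with the Watson formula) is 35 L, then
$$ \mathrm{weekly}\ Kt/V=\frac{(9.0+1.0)\ \mathrm{L/day}\times 7\ \mathrm{days}}{35\ \mathrm{L}}=2.0\ (\geq 1.7). $$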
Serum β2-MG concentration is thought to reflect middle molecule removal conditions to some extent, and reports have indicated improved prognosis when serum β2-MG is maintained at a level below 30 mg/L [132, 141, 145]. In patients undergoing combined PD + HD therapy, attention should be paid to serum β2-MG concentration levels, and the prescription content of the combined HD therapy should be analyzed if serum β2-MG concentration increases are observed.
Absence of renal failure symptoms such as malnutrition or anorexia
Combined PD + HD therapy is conducted to supplement insufficient solute removal. Therefore, the goal of this therapy is to eliminate renal failure symptoms caused by insufficient solute removal, such as anorexia, and if these renal failure symptoms are observed even during combined PD + HD therapy, they should be addressed by increasing the dialysis dose [66, 141]. Furthermore, transitioning to combined PD + HD therapy should be considered in cases where decreased serum albumin concentrations are observed during PD-only therapy, with the objective of maintaining these concentrations [140].
Protective peritoneal effects due to combined PD + HD therapy
Reports have indicated possible improvement of peritoneal function due to combination therapy; however, there is no concrete evidence of this. The length of time during which the peritoneum is not exposed to dialysis fluid has conventionally been thought to influence peritoneal function as a dialysis membrane, and reports have indicated that peritoneal function improved after long periods in which the peritoneum was not exposed to dialysis fluid (i.e., "peritoneal resting") [146]. HD therapy is conducted once every 1–2 weeks during combined PD + HD therapy, but the implementation of PD on these HD days is not authorized by the insurance system in Japan. There is therefore a period during PD + HD therapy in which PD is not conducted, in other words, a 1–2-day period during which the peritoneum is not exposed to PD fluid. Matsuo et al. reported that D/P Cr, an indicator of peritoneal permeability, significantly decreased following 1 year of combination therapy [135]. Analyses by Moriishi et al. on patients undergoing combined PD + HD therapy reported that D/P Cr significantly decreased in the high-average peritoneal function group and tended to decrease in the low and low-average peritoneal function groups [147]. Comparative analyses of cell activity in peritoneal resting models using human peritoneal mesothelial cells by Tomo et al. confirmed that peritoneal mesothelial cell activity improved with 24 h of peritoneal resting and that 1–2-day PD suspension periods may affect the peritoneal membrane tissue [148]. However, these clinical analyses focus only on D/P Cr, with measurement methods varying between studies and no histological examinations conducted, and fundamental research lacks established animal models. There is thus no consensus on whether peritoneal function is improved by peritoneal resting at a frequency of 1 day per week during combined PD + HD therapy; this should be researched in the future.
Notable points during combined PD + HD therapy
Maintain HD quality during combined PD + HD therapy
Avoid excessive ultrafiltration due to HD
Use a high-performance membrane dialyzer
Use purified dialysis fluid
The method by which HD is implemented is critical during combined PD + HD therapy.
Combined PD + HD therapy is often used to supplement solute or water removal in patients with decreased or absent RRF, and it can also be used to further enhance solute removal in patients whose RRF is relatively well maintained. The maintenance of RRF and urine volume should be a priority in these patients, so sudden HD-induced ultrafiltration should be avoided and any decreases in blood pressure should be closely monitored. Furthermore, high-performance dialyzer usage for HD is standard practice in Japan, and its use is also recommended by the Japanese Society for Dialysis Therapy in their "Clinical Guideline for Maintenance Hemodialysis: Hemodialysis Prescription" [141]. High-performance dialyzers should also be used in combined PD + HD therapy. When using these high-performance dialyzers, attention to the back-filtration of dialysis fluid is essential: if the dialysis fluid is biologically contaminated, back-filtration can allow pyrogens to enter the bloodstream and cause inflammation, which can in turn decrease RRF. HD must therefore be conducted with purified dialysis fluid, and care must be taken to maintain RRF [149,150,151].
Optimal dialysis in pediatric patients
The target dialysis dose in children should exceed the target dialysis dose of adults.
Appropriate body fluid management is essential for cardiovascular and long-term prognosis.
Age, sex, and physique must be considered in children to set up DW and to determine standard hypertension values.
The efficacy of combined PD + HD therapy in children is unclear.
Optimal dialysis from the perspective of substance removal and ultrafiltration
There is no clear definition of the optimal PD method in children. However, child-specific outcomes such as growth and development need to be considered, in addition to the survival rate, renal failure complications, and QOL effects, all of which are considered in adults. A high protein intake requirement per unit body weight and nitrogen dynamics are prioritized in children for their growth, so the total weekly urea Kt/V is used as an indicator of small solute removal. Exceeding the dialysis dose recommended in adults was set as the target for children undergoing HD because of the considerations mentioned above, and there is a consensus that substance removal during pediatric dialysis needs to exceed that of adults [152]. However, there is no clear target value for the total weekly urea Kt/V. The Japanese Society of Pediatric Dialysis has set the target total weekly urea Kt/V as 2.5 (3.0 in infants) for pediatric PD [153], and K/DOQI has set it as 1.8 in combination with remaining renal function [54]. Mortality rates in pediatric PD patients are lower than in adult patients, and many patients transition to kidney transplantation, making large-scale research that uses mortality or cardiovascular events as outcomes, similar to that conducted in adults, difficult when trying to validate these target values. Meanwhile, reports have indicated correlations between cardiac function, Ca/P values, anemia, FGF23, and the total weekly urea Kt/V [154,155,156]; however, the numbers of patients studied are small, and there is insufficient evidence to confirm this. Reports on growth have indicated that remaining renal function is more important than the total weekly urea Kt/V [157, 158], and the influence of remaining renal function in children is considerable. There is currently no clear evidence that increased small solute removal results in improved prognosis.
Thus, the target total weekly urea Kt/V value is a minimal treatment target and should be considered as one of the indicators of optimal dialysis. The cause of death in 38% of pediatric PD cases in Japan can be attributed to cardiovascular diseases, and the cause of transitioning from PD to HD treatment in 21% of cases can be attributed to ultrafiltration failure [159]. For these reasons, the optimization of circulatory dynamics is important in children. DW setup similar to that with HD is conducted based on the blood pressure, cardiothoracic ratio, echo-based inferior vena cava diameter, body composition measurements using a bioimpedance analysis, and ANP/BNP, in order to optimize body fluid volume. Blood pressure, cardiothoracic ratio, and body fluid volume can vary according to age, so both age and physique need to be considered [152]. Hypertension is an important finding that indicates excess body fluid. Although there are no large-scale pediatric studies that used mortality or cardiovascular events as outcomes, there have been studies that have used LVH or the internal carotid artery intima-media thickness as midterm outcomes. Hypertension was an independent predictive factor for LVH in pediatric PD and HD patients [160,161,162]. Elevated DBP value in pediatric PD patients was also an independent predictive factor for the internal carotid artery intima-media thickness in pediatric PD patients [163]. The importance of hypertension treatment has been recognized, but antihypertensive target values in pediatric PD patients have not yet been set; currently, values specified by the Japanese Hypertension Treatment Guidelines [164] and the Pediatric Hypertension Determination Standards are used as targets. Optimal dialysis in combined therapy (combined PD + HD therapy) There are no aggregate reports on combined PD + HD therapy in children. Assurance and maintenance of the vascular access necessary for HD in low-body-weight children are challenging. We anticipate further studies on adolescent patients in the future. Chapter 3 Nutritional management Malnutrition in PD patients Malnutrition is a factor that influences prognosis and QOL in dialysis patients. There are currently no systematic reviews on the nutrition of PD patients. PD patients are likely to present excess body fluid and malnutrition. Patients are likely to experience enhanced catabolism due to protein loss during dialysis with subsequent malnutrition. Nutritional management of PD patients should pay attention to weight loss and emaciation, as well as avoid insufficient dialysis due to decreased residual renal function. Malnutrition is a factor that significantly influences prognosis and QOL in maintenance dialysis patients, including those on HD and PD. The frequency of severe malnutrition cases is up to 10% and that of mild to moderate cases is between 30 and 60%, whereas around 30% shows no malnutrition [165,166,167,168,169]. Malnutrition in CKD patients can be caused by either insufficient energy intake from proteins or chronic inflammation, although most cases are a combination of the two [170, 171]. The current guideline for the nutritional management of CKD patients on HD is based on one existing systematic review [172]. However, there are yet no systematic reviews on the nutrition of PD patients. One-third of PD patients present excess body fluid conditions, with or without symptoms [173], and this fluid retention itself can induce malnutrition [174, 175]. 
Furthermore, protein leakage from the blood vessels into the PD dialysate during dialysis has been shown to promote catabolism and induce malnutrition. It has generally been considered that obesity in HD patients worsens prognosis, whereas in PD patients, reports have indicated that it is weight loss or emaciation that worsens prognosis [176]. There are only a few reports on sarcopenia and frailty in PD patients, which have become topics of increasing interest in recent years; however, these conditions seem to have significant effects on prognosis and QOL, similar to those in HD patients [177, 178]. Additionally, long-term PD is known to cause a decrease in residual renal function, and PD-only treatment is known to further worsen the patients' nutritional status due to insufficient dialysis [179,180,181]; thus, care must be taken regarding body weight changes, as well as dialysis efficiency, which should include residual renal function.
Nutritional management in PD patients
Total energy intake
Total energy intake is set at a value of 30–35 kcal/kg/day for the standard bodyweight (body mass index (BMI) = 22 kg/m²) and adjusted according to age, sex, and physical activity level.
The total energy amount in PD patients should consider the amount of energy absorbed from the peritoneum in addition to the energy absorbed from food.
Continuous glucose loads can induce increased triglycerides and low high-density lipoprotein (HDL) cholesterolemia. Care must be taken regarding increased body fat and the onset of cardiovascular complications.
An energy intake of 30–32 kcal/kg/day is generally considered suitable for patients with diabetic nephropathy, but appropriate energy levels should be set after evaluating the nutritional status of each patient individually.
A standard bodyweight corresponding to a body mass index (BMI) of 22 kg/m² is used when calculating the total energy intake amount (dietary energy intake amount + peritoneal energy absorption amount): standard bodyweight (kg) = height (m)² × 22. Total energy intake is set at 30–35 kcal/kg/day for the standard bodyweight and is individually adjusted using the patient's age, sex, and physical activity level [182]. The total energy amount in a PD patient is calculated from the amount of energy taken in through the diet along with the amount of energy absorbed through the peritoneum. The amount of energy absorbed through the peritoneum is influenced by the PD dialysate glucose concentration, the total PD dialysate amount used, the retention time, and peritoneal function. For example, the peritoneal energy intake amount is approximately 70 kcal when 2 L of 1.5% glucose concentration PD dialysate is retained for 4 h, 120 kcal when 2 L of 2.5% glucose concentration PD dialysate is retained for 4 h, and 220 kcal when 2 L of 4.25% glucose concentration PD dialysate is retained for 4 h [183] (a worked calculation is given at the end of this passage). Furthermore, reports have indicated that continuous glucose loading can result in increased triglycerides and low HDL cholesterolemia, as well as increased body fat and cardiovascular complication risk [184,185,186]. There has also been an increasing number of patients with diabetic nephropathy, and for these patients, 30–32 kcal/kg/day [187] is considered appropriate, as they exhibit obese tendencies at energy intake levels of 35 kcal/kg/day. However, ideally, the nutritional status of each patient should be evaluated and an appropriate energy intake amount established individually. Currently, icodextrin-based dialysates are on the market, in addition to glucose dialysates that use glucose as an osmotic agent.
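As a worked example of the energy calculation described above (the height and PD prescription are hypothetical and chosen only for illustration): for a patient 1.60 m tall,
$$ \mathrm{standard\ bodyweight}=1.60^{2}\times 22=56.3\ \mathrm{kg},\qquad \mathrm{total\ energy}=30\text{–}35\ \mathrm{kcal/kg/day}\times 56.3\ \mathrm{kg}\approx 1690\text{–}1970\ \mathrm{kcal/day}. $$
If, for example, the PD prescription corresponds to roughly 190 kcal/day absorbed from dialysate glucose (one 2-L 2.5% exchange at about 120 kcal plus one 2-L 1.5% exchange at about 70 kcal, using the figures quoted above), the dietary energy target becomes approximately 1500–1780 kcal/day.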
Icodextrin is manufactured and refined through the hydrolysis of corn starch and is structurally a glucose polymer. Icodextrin-based dialysates, which have a much larger molecular mass than glucose, are used for CAPD. Given their large molecular weight, these substances are slowly absorbed into the body through the lymph system, with only 20% and 34% absorbed after 8 and 12 h of retention time, respectively, thereby maintaining an even osmotic pressure gradient over a long retention period. Calculations show that 150 kcal of energy is absorbed when 2 L of icodextrin is retained for 8 h [188]. Therefore, icodextrin-based dialysates have superior ultrafiltration capabilities over conventional glucose dialysates. We anticipate that icodextrin-based dialysates would be more efficient in patients with diabetes, given their absence of glucose loading. Prospective analysis trials that have used icodextrin-based dialysate as a retained dialysate have indicated improved lipid metabolism effects due to reduced glucose loading [189]. Protein intake amount Protein and albumin loss into the PD dialysate is thought to be about 10 g and 2–4 g per day, respectively. Protein intake amount is thought to be related to prognosis and onset of cardiovascular events. The ideal protein intake amount is set at 0.9–1.2 g/kg/day assuming suitable energy intake. However, this needs to be evaluated and adjusted based on multiple nutritional indicators. Protein and albumin loss into the dialysate in PD patients is considered to be approximately 10 g and 2–4 g per day, respectively. However, PD prescriptions can influence the amount lost, with increases in the exchanged PD dialysate resulting in increased protein and amino acid loss [190, 191]. A protein intake amount of over 1.2 g/kg/day for a standard body weight is set as the target in order to replenish these losses [192, 193], with the strain on the residual kidney being considered. So far, there is no clear, standardized protein intake amount that applies to all patients. The regression line between normalized protein nitrogen appearance (nPNA) and %creatinine generation rate (%CGR), which is a muscle component indicator, in the protein intake amount data of 100 PD patients in Japan showed an intersection point between an nPNA value of 0.9 g/kg/day and %CGR value of 100% [194]. This indicates that the protein intake amount in PD patients with a favorable nutritional status is 0.9 g/kg/day. Furthermore, there was only one patient in this analysis that had an nPNA value of over 1.2 g/kg/day, which points to the fact that the protein intake amount of PD patients in Japan is probably significantly lower than in other countries. There are reports that indicate that the amount of protein intake is related to the prognosis and onset of cardiovascular events [195, 196], with levels below 0.8 g/kg/day thought to increase the risk. However, it is thought that this observed favorable prognosis is due to the protein being adequately absorbed and assimilated as a protein in the body, rather than the intake amount itself directly influencing prognosis. In this sense, the protein intake amount should not be the sole basis for dietary guidance, but rather, the evaluation with multiple indicators such as serum albumin levels, lean body mass, and a protein index score [197] would be more effective. In conclusion, the ideal protein intake amount in PD patients in Japan should be 0.9–1.2 g/kg/day, in the premise of a suitable energy intake. 
However, this needs to be adjusted after evaluating multiple nutritional indicators.
Dietary salt intake amount
Salt intake management is vital in PD patients, who are susceptible to excess body fluid.
The target salt intake amount is based on the PD ultrafiltration volume and urine volume and is calculated as [ultrafiltration volume (L) × 7.5 g] + [0.5 g per 100 mL of residual renal urine volume]; in actual clinical settings, a practical target of 0.15 g/kg/day with an upper limit of 7.5 g/day can be used (a worked example is given at the end of this passage).
Over 30% of PD patients are affected by excess body fluid conditions, with or without overt symptoms [123]. This highlights the importance of dietary salt intake guidance, because excess body fluid conditions due to excessive intake of dietary salt or water are thought to induce hypertension and to be a risk factor for cardiovascular complications [198]. Furthermore, latent excess body fluid can make the protein intake amount insufficient and is recognized as a factor in malnutrition [174, 175, 199]. According to the Japanese Society of Nephrology guidelines, the salt intake amount of PD patients is set as [ultrafiltration volume (L) × 7.5 g] + [0.5 g per 100 mL residual renal urine volume] [200], which is recommended based on its balance with the removal amount. Thus, patients who have completely lost residual renal function should have an upper dietary salt intake limit of 7.5 g per day per 1 L of PD ultrafiltration. In actual clinical settings, a target of 0.15 g per kg of body weight per day, with this upper limit of 7.5 g/day, should be used. The limit should be reduced to less than 6 g per day in patients with hypertension but should be re-adjusted if it causes malnutrition. With regard to the lower limits, there is a retrospective report by Dong et al. on 305 PD patients whose daily dietary salt intake ranged from 1.93 to 14.1 g [201]. When the patients were divided into tertiles of intake, the lowest tertile (average of 3.58 g/day) had significantly higher total and cardiovascular mortality rates, so a daily intake of at least 3 g is thought to be necessary. Research on automated PD in Europe reported a favorable prognosis in anuric patients with over 750 mL of ultrafiltration [202], which corresponds to a salt intake amount of 5.6 g. However, frequent exchanges during automated PD can result in reduced sodium (Na) removal, so measurement of the Na removal amount is ideal [203]. In conclusion, dietary salt intake guidance needs to consider the urine volume and ultrafiltration amount of each individual patient.
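A worked example of the dietary salt formula above, using hypothetical values: for a patient with 0.8 L/day of PD ultrafiltration and 300 mL/day of residual urine,
$$ 0.8\ \mathrm{L}\times 7.5\ \mathrm{g/L}+\frac{300\ \mathrm{mL}}{100\ \mathrm{mL}}\times 0.5\ \mathrm{g}=6.0+1.5=7.5\ \mathrm{g/day}, $$
which coincides here with the practical ceiling of 7.5 g/day; for a 50-kg patient, the practical rule of 0.15 g/kg/day likewise gives 0.15 × 50 = 7.5 g/day. The protein recommendation of the preceding section can be handled in the same way: for a standard bodyweight of 56 kg, 0.9–1.2 g/kg/day corresponds to roughly 50–67 g of protein per day.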
It is also important to ensure, during nutritional status evaluations, that muscle mass does not decrease over time. The K/DOQI guidelines [206] indicate the evaluation criteria and frequency, with the presence of anemia, as well as the evaluation of potassium, calcium, and phosphorus, as being important. However, many of the existing studies have been conducted on HD patients, with only a few on PD patients. Furthermore, regular dialysis dose evaluation based on collection of 24-h PD drainage fluid and 24-h urine volume is important for monitoring the nutritional status of PD patients. Nutritional evaluations should be conducted at least once every 6 months while taking into consideration possible changes over time. Regular body measurements and body composition analyses can also be beneficial. Nutritional screening Subjective Global Assessment (SGA) As part of the SGA, body weight changes, dietary intake conditions, and the presence of digestive symptoms such as nausea, vomiting, anorexia, and diarrhea are comprehensively scored. Reports have indicated its effectiveness in the nutritional management of PD patients [207,208,209,210]. Other screening methods Other screening methods include the malnutrition universal screening tool (MUST) and the mini-nutritional assessment (MNA). Each facility can select which screening method is suitable. Nutritional assessment Body measurements are important for evaluating nutritional status. Regular height, body weight, BMI, arm circumference (AC), and triceps skinfold thickness (TSF) measurements can be used as nutritional status indicators. The AC and TSF are used to calculate the arm muscle circumference (AMC) and area (AMA). These measurements allow for the simple, straightforward, and indirect estimation of muscle and fat content in the body [211]; however, it should be taken into consideration that they can be influenced by extracellular fluid volume. These indicators have also recently been related to systematic nutritional status or prognosis [202,203,204,205,206,207,208,209,210,211,212,213]. Body component analysis methods The body component analysis methods that currently have the most reproducibility and are recognized as efficient in evaluating the protein content in PD patients are dual-energy X-ray absorptiometry (DEXA) [214, 215] and bioelectrical impedance analysis (BIA) [216, 217]. AMC and TSF are known to be somewhat correlated to DEXA and BIA results [215]. Measurements for BIA must be taken after PD dialysate drainage. Blood chemistry assessment Serum albumin and pre-albumin are used for blood chemistry analysis to evaluate nutritional status. Serum albumin values are representative prognostic factors in patients with end-stage renal failure [216]. However, a wide range of factors influences serum albumin values in PD patients, including inflammation, loss of albumin into the dialysis fluid, and body fluid management conditions. Thus, serum albumin is not the most suitable indicator of the body protein amount and nutritional status [212, 218, 219]. For these reasons, the geriatric nutritional risk index (GNRI) [220], which is frequently used in HD patients, is not very useful as a screening tool in PD patients [221]. Reports have indicated that there is a weak negative correlation between albumin and acute reactive protein concentration in the blood [222], while prealbumin has not yet been clearly shown to be an appropriate biochemical indicator for evaluating the nutritional status of PD patients. 
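The AMC and AMA mentioned above are usually derived from the AC and TSF measurements with the standard anthropometric formulas below; these formulas are given only for illustration and are not specified in this guideline, and the example values are hypothetical:
$$ \mathrm{AMC\ (cm)}=\mathrm{AC\ (cm)}-\pi\times \mathrm{TSF\ (cm)},\qquad \mathrm{AMA\ (cm^{2})}=\frac{\mathrm{AMC}^{2}}{4\pi}. $$
For example, with an assumed AC of 26 cm and TSF of 10 mm (1.0 cm), AMC ≈ 26 − 3.14 ≈ 22.9 cm and AMA ≈ 22.9²/12.57 ≈ 41.6 cm².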
Low nutritional status accompanying inflammation is a pattern of malnutrition observed in patients with kidney failure, whereas C-reactive protein (CRP) is important for finding the causes of malnutrition. CRP has been recognized as a predictive factor of mortality risk in PD patients [223, 224]. Evaluation of the protein intake amount The conventional procedure for the evaluation of the protein intake amount is based on interviews with patients. However, considerable training is required for the nutritionists to record, in detail, the individual diet content and intake, as well as to analyze them accurately. Other indices often used include the protein catabolic rate (PCR) (for example, based on the Randerson equation), which is calculated by analyzing 24-h urine collection and total 24-h PD dialysate drainage. Furthermore, reports have indicated that normalized PCR (nPCR), which is normalized based on each patient's individual body weight, is an even more efficient evaluation method of nutritional status compared to PCR [225]. Evaluation of dialysis dose To detect patients' possible malnutrition status due to inadequate dialysis earlier, regular evaluation of PD efficiency, residual renal function, and peritoneal function should be conducted. Nutrition intervention Once the deterioration of malnutrition status is detected, the underlying cause needs to be examined so that countermeasures can be implemented promptly. Clinical treatment conducted by a multidisciplinary team is an effective way to implement nutrition intervention methods. Malnutrition in PD patients can be attributed to factors such as nutritional intake deficiencies, nutrient loss into the dialysis fluid, chronic inflammation, and worsening uremia due to inadequate dialysis. Once deterioration of the malnutrition status is detected, the underlying causes need to be examined so that countermeasures can be implemented promptly. The fundamental pathology of malnutrition resulting from nutrient intake deficiency is energy intake deficiency, thus a suitable nutrition guideline is important [226]. Oral administration of a high-energy liquid diet is useful for patients with severe malnutrition. Meanwhile, along with the aging of the dialysis patients, sarcopenia as previously mentioned has also become a compelling issue, and suitable exercise therapy, in addition to nutritional management, is necessary for both matters. Nutrition intervention is thought to have minimal effects for malnutrition accompanying inflammation, and treatment of the primary disease that causes the inflammation is prioritized. A system that evaluates and introduces nutrition more effectively while also considering the patients' background (e.g., family/social support, patient's ADL, and economic situation) and specific characteristics of dialysis therapy should be constructed. Although there are already nutritional management and nutrition guideline policies for HD patients [227], there are only few reports regarding PD; however, we can suggest nutrition intervention methods for PD patients using HD cases as a reference. Thorough nutritional education at the initial stage Nutritional education during the dialysis initiation period is important for achieving a stabilized lifestyle with dialysis. Diet management should be adjusted between the pre-dialysis and dialysis initiation periods, with nutrition consultations that evaluate the appropriate salt intake, protein control, and energy intake. 
Patients should be able to recognize the key methods for selecting foods and ensure that the recommended nutritional intake amounts are met. Patients should also be made aware of the importance of nutritional management and dietary therapy. During PD, patients must also pay attention to possible hypokalemia because potassium is excreted. Body composition measurements, residual renal function, and peritoneal function evaluation The optimal weight for PD patients, which corresponds to the dry weight in HD patients, is not always easily determined. Regular body composition measurements, weight check-ups, physical examinations, and imaging examinations are useful for evaluating body fluid balance and assuring that it is appropriately maintained. Patients who have excess body fluid levels should undergo salt intake evaluation and receive consultation, and the possible need for changes to the oral treatment and dialysis prescription should be considered. Evaluation of the residual renal function through 24-h urine collection analysis and/or evaluation of peritoneal function based on the peritoneal equilibration test (PET) is necessary for appropriately determining the dialysis efficiency, and such evaluations should be regularly conducted (every 6–12 months). Patients' PD prescription or dialysis method should be changed if decreased dialysis efficiency is suspected to be causing malnutrition. Dialysis maintenance period Self-monitoring recommendations based on continuous nutritional guidance Regular re-evaluations of patients' nutritional status should be conducted, and any abnormalities should be identified in the creation and implementation of nutritional screening, assessment, and planning. Continuous consultation that establishes plans based on these evaluations should be created and implemented until improvement is observed. Background factors in daily life, such as home and work environments, need to be considered when evaluating the observed improvements in dietary habits. The consultation should aim towards positive behavioral changes in the patients. Comprehensive nutritional management based on regular clinical dialysis teams The formation of a multidisciplinary team, including physicians, nurses, nutritionists, clinical engineers, physical therapists, psychologists, and social workers, can be useful for evaluating nutrition and implementing consultations. Nutritional management in pediatric patients Malnutrition has a serious impact on growth and psychomotor development in pediatric PD patients. Patients under the age of 2 with reduced oral intake should be actively considered for enteral nutrition either with nasogastric tube feeding or gastrostomy. Dietary content and physical measurement values, including height and growth rate, should be evaluated regularly. Focused nutrition interventions by a nutritionist are ideal. The recommended energy intake amount is equivalent to that in healthy children and is based on the "Japanese dietary reference intake." Protein intake should be sufficient to cover protein lost through PD, whereas excessive phosphorus intake should be avoided. Dietary salt replenishment is necessary in cases where Na is lost through urine due to congenital anomalies of the kidney and urinary tract or when a relatively large amount of ultrafiltration is necessary for an infant for their physical size and Na is lost through drainage. 
Fundamentals of nutritional management in pediatric PD patients
The evaluation of nutritional status is critical in pediatric PD patients, as malnutrition has a severe impact on growth and psychomotor development [228, 229]. Reports have indicated that growth disorders not only preclude reaching a standard final height but also act as an independent risk factor for mortality [230, 231]. Significant growth disorders during infancy, the period with the highest growth rates, can render subsequent catch-up of growth and development extremely difficult. Furthermore, growth during infancy has minimal dependence on growth hormone compared to other periods, so suitable nutritional management is essential from an early stage [232]. Decreased appetite and vomiting are observed in pediatric PD patients, making consistently sufficient nutritional intake quite challenging; these symptoms can result from decreased gastrointestinal motility, gastroesophageal reflux, cytokine generation accompanying end-stage kidney disease, and increased intra-abdominal pressure due to dialysis fluid retention [233, 234]. As such, in patients with feeding difficulties, particularly those under the age of 2, forced nutrition, in other words, enteral nutrition through a nasogastric tube or gastrostomy, needs to be considered. Reports from an international pediatric PD network registry, which compared nutritional management through oral intake, nasogastric tubes, and gastrostomy, showed that the duration of management with gastrostomy was significantly correlated with growth [235]. Peritonitis and surgical complication risks were present when gastrostomy was established after the initiation of PD, and thus its establishment prior to or at the same time as PD initiation is considered ideal [236,237,238]. However, there are reports that gastrostomy placement after the initiation of PD does not increase complications [239].
Nutritional evaluation of pediatric PD patients
A suitable nutritional status in children is defined as the state in which a suitable variety and quantity of food is taken into the body and normal growth is maintained [232, 240]. The KDOQI guidelines recommend periodically evaluating the dietary content and the SD scores of height, growth rate, BMI, and head circumference (under the age of 3) in all CKD patients, including those on dialysis (Table 1). Nutritional management should be provided under the guidance of a nutritionist who is trained in pediatric kidney disease-related nutrition and with the support of the patient, their guardians (e.g., parents), and a multidisciplinary team well-versed in pediatric kidney disease treatment (physicians, nurses, and social workers) [229].
Nutritional management in pediatric PD patients
Table 1 Recommended nutritional evaluation criteria and evaluation intervals in pediatric peritoneal dialysis patients (modified from reference [229])
Pediatric PD patients need to have a sufficient energy intake, similar to that of children at other stages of CKD and of healthy children [229, 241]. The intake amount is based on the "Japanese dietary reference intake," which is revised by the Japanese Ministry of Health, Labour and Welfare every 5 years (Table 2) [241, 242]. In actual practice, the patient starts with a set energy intake amount that corresponds to their physique and age, which is gradually increased if sufficient growth is not obtained [241]. Additional energy derived from dialysate glucose is estimated at 8–12 kcal/kg/day [232].
This is influenced by peritoneal permeability in actual practice, so the glucose absorption amount should also be evaluated regularly.
Table 2 Estimated required energy amount (kcal/day) (modified from reference [242])
The dietary reference intake amount for protein also needs to be considered. This needs to supplement the amount lost through PD, and the Japanese Society of Pediatric Dialysis indicates the recommended protein intake amount for children on PD (Table 3) [243]. Care must also be taken to ensure that an excess phosphorus load is not applied to children, and a low-potassium, low-phosphorus formula (Meiji 8806H®) is efficacious for enteral nutrition in children with renal failure [241].
Table 3 Recommended protein intake amount in pediatric peritoneal dialysis patients (modified from reference [243])
Pediatric PD patients, particularly those with congenital anomalies of the kidney and urinary tract, tend to maintain urine output and lose Na through their urine, making dietary salt replenishment necessary. Furthermore, as Na is lost through ultrafiltration, hyponatremia can easily occur if a large amount of ultrafiltration relative to the patient's physical size is needed, even in patients with anuria (e.g., infants), making dietary salt replenishment also essential in this case [244]. However, the Na removal amount can vary according to dwell time, so care must be taken [245]. Breast milk and normal milk have low Na concentrations (6–8 mEq/L); Meiji 8806H milk (27 mEq Na/L at a standard 15% concentration) is efficacious. Meanwhile, Na must be restricted in patients with overhydration or hypertension [241].
Chapter 4 Peritoneal function
There are several well-established methods for evaluating peritoneal function, and each has its own characteristics. Among them, the PET is the most widely performed method.
Regularly evaluating peritoneal function with the standard PET or simplified methods (fast PET) is desirable.
Extensive research on biomarkers in the effluent for determining peritoneal conditions has become available. We expect that these biomarkers will become, alongside PET, new methods for evaluating peritoneal function in the future.
PD is a treatment method that utilizes the physiologic and anatomical properties of the peritoneum to perform blood purification. The peritoneal function in PD refers to the condition of the peritoneum in each patient when performing dialysis therapy. When dialysate with a specific composition is retained in the abdominal cavity for a fixed period of time and then drained, the drainage volume and drainage solute composition vary according to the individual. These differences are due to the differing peritoneal solute permeability or water permeability of each individual. In this chapter, peritoneal function is defined as the collective solute and water permeability of the peritoneum. The evaluation and comprehension of peritoneal function in each patient are critical for a proper PD prescription (e.g., determination of dwell time, bag exchange frequency, and PD fluid concentration/amount). Peritoneal function also changes over time with continued dialysis [246, 247]. There are three well-established methods for evaluating peritoneal function: (a) the PET [248], (b) the overall mass transfer area coefficient (MTAC) [249, 250], and (c) software for peritoneal function analysis [251]. Although each method is unique and useful, no clinical trials have established which of these methods is best for managing patients on PD.
Therefore, peritoneal evaluation methods should be selected according to the conditions of each facility and should be analyzed continuously. The most widely used evaluation method of peritoneal function is the PET, the use of which is supported by previous research reports and clinical studies. The original technique of the PET was proposed by Twardowski et al. [248] and is used throughout the world. It is unique in that no specialized devices or software are required, and comparisons with previous data or between patients can be made. In this method, 2.0 L of 2.5% glucose concentration dialysis fluid (or an equivalent osmotic dialysate) is used. After 2 and 4 h of dialysate dwell, the ratio of the creatinine concentration in the dialysis fluid (D) to that in plasma (P) (D/P Cr) and the ratio of the glucose concentration in the dialysis fluid (D) to its initial concentration (D0) (D/D0 Glu) are calculated. The D/P Cr and D/D0 Glu data are used to evaluate the removal efficiency of smaller solutes and the ultrafiltration efficiency, respectively. The results of the PET are plotted on a standard curve to classify patients into the following four categories, in descending order of peritoneal permeability: "High," "High Average," "Low Average," and "Low." PD prescriptions can be considered based on these results. In Europe, because "high" can be mistaken for an indicator of good solute removal, this category is referred to as "fast transporter," and "low" is referred to as "slow transporter" [252]. By measuring the D/P ratio of other measurable substances, the permeability of solutes of other molecular masses can also be evaluated. There are various modifications of the PET. The method that evaluates only the 4-h data of the standard PET is referred to as the fast PET [253]. The smaller number of samples enables faster evaluation, but the results can differ from those of the original method, so care must be taken [254]. PET using 4.25% glucose dialysate instead of 2.5% is useful for diagnosing ultrafiltration failure and evaluating Na sieving [255, 256]. Refer to the appendix for the actual methods of the PET and MTAC. The PET should be conducted regularly, or at least once a year, to monitor peritoneal function. According to 2016 reports by the Japanese Society for Dialysis Therapy [257], the percentage of PD patients (excluding combined therapy) who underwent PET, including fast PET, in Japan was approximately 64%. There was no difference between the sexes in the average D/P Cr ratio, with values of 0.67 and 0.65 for males and females, respectively. The D/P Cr ratio tended to increase with age. Analysis of the D/P Cr ratio by primary disease showed that it tended to be higher in diabetic nephropathy and renal sclerosis than in other diseases.
Research based on PET
There are many reports of factors that influence PET results. It is thought that cytokine production, peritoneal vascular distribution, and blood flow conditions in the abdominal cavity change immediately following catheter placement, causing unstable peritoneal permeability, and actual PET data are variable for up to 1 month following initiation [258]. For these reasons, the guidelines in the USA and Canada recommend that the first PET be performed at least 1 month after the initiation of PD [259, 260]. PD-related peritonitis significantly affects peritoneal permeability; as a result, permeability increases and the ultrafiltration rate decreases [261].
However, these changes are thought to be transient and recover by 1 month after healing of peritonitis [262, 263]. Meanwhile, long-term peritoneal inflammation may lead to the progression of angiogenesis and fibrosis, which may affect peritoneal permeability [264]. Long-term PD periods also gradually enhance peritoneal permeability (D/P Cr) and decrease ultrafiltration performance. These changes are accelerated by exposure to high glucose concentration dialysate from an early stage [265]. The results of peritoneal biopsy in Japan showed that peritoneal thickness and angiopathy progressed with long-term PD and that groups with decreased ultrafiltration performance had increased peritoneal thickness [266], suggesting that there is a close relationship between decreased peritoneal function and structural changes in the peritoneum. Reports on the effect of icodextrin on peritoneal function have indicated that there is no statistically significant difference between it and groups who only use glucose solution [267, 268]. However, some reports indicated potentially worsened peritoneal function [269]. Therefore, further research is required. Many research reports that analyzed the differences in PET results using neutralized versus acidic dialysate indicated no differences in small molecule permeability and ultrafiltration capacity [270, 271]. Neutral dialysate has also been shown to have minimal effects on peritoneal permeability and morphology for over 3 years [272]. However, there have been reports that have indicated that neutral fluids decrease ultrafiltration performance [273, 274], and their effects are not constant. It is suggested that the biocompatibility of the dialysis solution has some impact on peritoneal function. Combined PD + HD therapy is commonly practiced in Japan, but some reports have indicated that peritoneal function changes with combined therapy [147, 275] and that D/P Cr has a decreasing tendency following the commencement of combined therapy. Numerous research reports have analyzed the relationship between peritoneal function and prognosis. Patient groups with increased peritoneal permeability and decreased ultrafiltration performance, or in other words, patient groups who fit in the "High" category for PET, tended to have poor prognoses [276,277,278,279]. Peritoneal permeability is generally dependent on the dialysis period, but some patients already have increased peritoneal permeability from the initial stages of dialysis initiation. Analyses on biopsies of the peritoneum at the time of initiation indicated a statistically significant correlation between macrophage invasion extent and peritoneal permeability [280]. A wide range of factors, including local inflammation in the peritoneum, race, age, sex, residual renal function, diabetes, and hypoalbuminemia, might contribute to baseline peritoneal permeability [265, 281, 282]. Recent studies have reported a relationship between peritoneal function and genetic polymorphisms (e.g., vascular endothelial growth factor (VEGF), interleukin-6 (IL-6), endothelial NO synthase, and receptors for advanced glycation end-products, RAS genes) [281, 283]. Therefore, furthering our understanding of indications of PD and patient prognosis may be possible at the genetic level. Research on biomarkers in effluent Research on biomarkers in effluent has been conducted, which has been useful for analyzing peritoneal function alongside PET. 
Measurements of the cancer antigen 125 (CA125) and IL-6 are particularly straightforward, so they have conventionally been used to evaluate the peritoneal condition. CA125 is a 220-kDa glycoprotein produced by peritoneal mesothelial cells, and its concentration in the effluent is thought to reflect the amount of peritoneal mesothelial cells [284]. The level of CA125 is measured when conducting the PET, which enables evaluation of the mesothelial cell amount [285]. Reports have indicated that the concentration of CA125 in the effluent often tends to decrease with dialysis duration; this result corresponds with actual morphological findings [285]. Some facilities routinely conduct CA125 drainage measurements. Further evaluation is required to determine whether these measurements serve as predictive or prognostic factors for EPS. IL-6 is a versatile cytokine produced by a variety of cell types, including T cells, activated macrophages or monocytes, and vascular endothelial cells. The IL-6 concentration in the effluent is thought to reflect inflammatory or pre-inflammatory conditions, and many reports have indicated its relationship to peritoneal permeability [286] or its changes over time [287, 288]; however, further research is necessary regarding its clinical application. Several lines of research have also been conducted on markers of peritoneal neoangiogenesis (VEGF, transforming growth factor β, tumor necrosis factor α), markers of endothelial dysfunction (E-selectin, vascular cell adhesion molecule-1), markers of tissue fibrosis (type IV collagen, plasminogen activator inhibitor-1, CC chemokine ligand-18), and markers of tissue remodeling (matrix metalloprotease-2, hyaluronic acid). We anticipate their application to the increasingly accurate evaluation of peritoneal function in the future [289,290,291].
Peritoneal equilibrium test: standard method (PET)
A-1. Inject 2000 mL of 2.5% glucose dialysis solution (or equivalent).
A-2. Immediately collect a dialysate sample (= CD(0)).
A-3. Two hours later, collect a dialysate sample (= CD(120)) and a blood sample (= \( \overline{C}_B \)).
A-4. Four hours later, collect a dialysate sample (= CD(240)) and drain the remaining fluid.
A-5. For creatinine, 3 points are plotted on the calibration curve: CD(0)/\( \overline{C}_B \), CD(120)/\( \overline{C}_B \), and CD(240)/\( \overline{C}_B \).
A-6. For glucose, 3 points are plotted on the calibration curve: 1.0, CD(120)/CD(0), and CD(240)/CD(0). (CD(0) may be taken as a theoretical value of 2.27 g/dL) (Fig. 1).
A-7. If the creatinine and glucose results differ, prioritize the creatinine results.
Peritoneal equilibrium test: simple method (fast PET)
B-1. Inject 2000 mL of 2.5% glucose dialysis solution (or equivalent), as in the standard method.
B-2. Four hours later, collect a dialysate sample (= CD(240)) and a blood sample (= \( \overline{C}_B \)), and drain the remaining fluid.
B-3. For creatinine, plot CD(240)/\( \overline{C}_B \) on the calibration curve.
B-4. For glucose, plot CD(240)/CD(0) on the calibration curve. (CD(0) may be taken as a theoretical value of 2.27 g/dL) (Fig. 1).
Calculating the total mass transfer area coefficient (MTAC)
Fig. 1 Calibration curves for the peritoneal equilibrium test (modified based on references [292, 293])
The following procedure applies when the "Peritoneal equilibrium test: standard method" (2.5% glucose dialysis solution) described above is used; the MTAC obtained is therefore that for a 4-h dwell of 2.5% glucose dialysis solution. MTAC can be calculated in the same way when another dialysis solution is used or when the solution is retained for a different length of time.
Inject 2000 mL of 2.5% glucose dialysis solution (or equivalent), and record the amount injected as VD(0).
Perform the standard peritoneal equilibrium test (see A-1 to A-4 above), and record the drained amount as VD(t) when draining.
Calculate the MTAC with t = 240 min, assuming that the mean dialysate volume \( \overline{V}_D \) = VD(t) [249, 250]:
$$ \mathrm{MTAC}=K_{0}A=-\frac{\overline{V}_D}{t}\,\ln \left[{\left\{\frac{V_D(t)}{V_D(0)}\right\}}^{1/2}\,\frac{C_D(t)-\overline{C}_B}{C_D(0)-\overline{C}_B}\right] $$
(A worked numerical example with hypothetical values is given at the end of this passage.)
Peritoneal function in pediatric patients
PET for children in Japan is standardized, and evaluation standards have been established for children.
PET methods are the same for both children and adults, but the recommended injection amount for children is 1100 mL/m².
PET-based evaluations of peritoneal function should be conducted regularly.
Standardization of PET in children and implementation methods
PET is also widely used for the evaluation of peritoneal function in pediatric patients. PET implementation methods were standardized in children by Warady et al. in 1996 [294]. Adults receive 2-L injections (from Twardowski et al. [295]), so, considering body surface area, a corresponding injection volume of 1100 mL/m² was designated for children. With this volume, there are virtually no differences in creatinine and glucose permeability according to age, and a classification that uses the same categories as in adults was established [294]. PET in Japan was standardized by Kaku et al. using data from 175 patients, which produced a classification with virtually identical categories to those reported by Twardowski et al. and Warady et al. [296]. The PET evaluation standards in Japanese children (values at 240 min of retention) are shown in Tables 4 and 5 [296, 297]. PET methods follow those used in adults; however, please refer to the section on PET in the "Pediatric PD Treatment Manual" for more detail on the standardized PET methods in Japanese children [297]. The PET requires a 4-h period, so a short PET method, in which the dialysis fluid concentration (2.5%) and the injection amount are kept the same but the retention period is set at 2 h, is recommended. Reports have shown that the 2-h and 4-h retention data fall into the same categories, indicating the utility of this short method [298, 299].
Research on PET-based peritoneal function
Relationship between peritoneal function and age, dialysis duration, and peritonitis
Table 4 PET evaluation standards in children (D/D0 Glu ratio) [297]
Table 5 PET evaluation standards in children (D/P Cr ratio) [297]
Studies that implemented PET over time showed no changes in the PET categories from PD commencement up to 2 years [300, 301]. However, reports have indicated that the D/P Cr ratio increased and the D/D0 Glu decreased from the second year onwards [302]. Analyses of 93 PETs in 20 patients by Iwata et al. showed that PD duration had a positive relationship with the D/P Cr ratio and a negative relationship with the D/D0 Glu; each regression line showed that the PET fell into the "High" category after about 6 years [303]. Concerning age, infants tended to have high peritoneal permeability [294, 304]. Reports have indicated that, if there is a medical history of peritonitis, peritoneal permeability is high and the PET categories become higher [301, 305].
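As a worked numerical example of the PET ratios and the MTAC formula above (all values are hypothetical and chosen only for illustration): suppose VD(0) = 2000 mL, VD(240) = 2200 mL, plasma creatinine \( \overline{C}_B \) = 9.0 mg/dL, dialysate creatinine CD(240) = 6.1 mg/dL, and dialysate glucose CD(240) = 0.80 g/dL with CD(0) = 2.27 g/dL. Then D/P Cr = 6.1/9.0 ≈ 0.68 and D/D0 Glu = 0.80/2.27 ≈ 0.35, which are plotted on the calibration curves of Fig. 1 to assign the transport category. Taking the mean dialysate volume as the drained volume, with \( \overline{V}_D \) = 2200 mL, t = 240 min, and CD(0) = 0 for creatinine,
$$ \mathrm{MTAC}=-\frac{2200}{240}\,\ln\!\left[\left(\frac{2200}{2000}\right)^{1/2}\frac{6.1-9.0}{0-9.0}\right]\approx -9.17\times\ln(0.338)\approx 9.9\ \mathrm{mL/min}. $$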
Enhanced peritoneal permeability is correlated to ultrafiltration failure, which is a key symptom that commonly occurs in EPS [306, 307]. Therefore, PET-based evaluations of the peritoneal function should be regularly conducted. Influence of neutral dialysis fluid on peritoneal function Section "Relationship between peritoneal function and age, dialysis duration, and peritonitis" focuses on evaluations with acidic dialysis fluid, but there are no reports that clearly show how neutral dialysis fluid affects PET among children. However, analyses that compared neutral dialysis fluids, which used bicarbonate and acidic dialysis fluids containing lactic acid, showed that there were no differences in the PET results. However, patients treated with neutral dialysis fluids had statistically significantly higher serum bicarbonate concentrations and dialysate CA125 concentrations, reflecting the amount of peritoneal mesothelial cells, which were two times greater than those treated with acidic dialysis fluids [308]. Reports have also indicated that neutral dialysis fluid has higher free water transport [309]. Additionally, it has been reported that neutral dialysis fluids that contain bicarbonate and not lactic acid have a higher ultrafiltration volume [310]. Further accumulation of data is needed in the future. Chapter 5 Discontinuation of peritoneal dialysis to avoid encapsulating peritoneal sclerosis PD should be discontinued in patients undergoing long-term PD or where peritoneal deterioration is suspected following intractable peritonitis to mitigate the risk of EPS development. To perform PET regularly is recommended to evaluate peritoneal deterioration. The Japanese Society for Dialysis Therapy in their 2009 Peritoneal Dialysis Guidelines (GL) presented the above two points as conditions for discontinuation in order to avoid EPS [66]. These same points are also used in the revised GL. Peritoneal deterioration is a comprehensive concept that considers decreased peritoneal function and morphological changes of the peritoneum. Ultrafiltration failure and enhanced peritoneal permeability are characteristics of decreased peritoneal function. Morphological changes of the peritoneum refer to phenomena that can be identified with a peritoneal laparoscope, pathological peritoneal findings, and cytology of mesothelial cells in drainage fluid. A GL public awareness survey conducted in 2011 following the publication of the Peritoneal Dialysis Guidelines in 2009 [66] revealed that the GL relating to EPS were widely recognized, with over 90% of facilities conducting PET and over 80% of facilities referring to the PD discontinuation of the GL [311]. For these reasons, this chapter takes over the 2009 PD-GL as a basis and provides explanations using subsequent evidence gathered since that time. Peritoneal deterioration and EPS Epidemiology of acidic dialysis fluid The incidence of EPS in Japanese PD patients has been reported to be between 0.9 and 2.4% [312,313,314,315]. EPS development is primarily associated with peritoneal deterioration, which is attributed to underlying factors such as diabetes, age, uremic toxins, drugs, peritonitis, and various biological stimulants endogenously present in PD treatment systems. Additionally, it is thought that its severity generally increases over time on PD. In this context, the effects of the bio-incompatibility of PD fluid have been a crucial issue. 
Causes of peritoneal deterioration include acidity, lactic acid, high osmolality, high glucose, and glucose degradation products [316, 317]. In 1997, the research team on the evaluation and application of CAPD treatment, part of the chronic renal failure research group of the long-term chronic diseases comprehensive research project of the Japanese Ministry of Health, Labour and Welfare, published "Sclerosing Encapsulating Peritonitis (SEP) Diagnosis/Treatment Guidelines (Proposed)," which outlined the "CAPD discontinuation standard guidelines for avoiding SEP (EPS)" [318]. Decreased peritoneal function (ultrafiltration failure), peritonitis, and PD duration (over 8 years) were defined as risk factors for SEP (EPS). The ISPD presented an international definition of this disease entity and changed the term from SEP to EPS to avoid the misunderstanding that peritonitis is a prerequisite for EPS development [316]. Subsequent Japanese prospective observational studies of patients on acidic fluids revealed that the incidence of EPS was 2.5% and 3.18/1000 patients/year, with the incidence increasing with PD duration: 0%, 0.7%, 2.1%, 5.9%, 5.8%, and 17.2% in patients having undergone PD for 3, 5, 8, 10, 15, and > 15 years, respectively [314]. The Scottish Renal Registry also showed that the median 5-year incidence of EPS was 2.8% and 13.6/1000 patients, indicating that EPS develops at earlier stages there than in Japan [319, 320]. In conclusion, PD duration is related to EPS risk, but it is thought that EPS development cannot be avoided completely even if PD duration is limited.
Influence of neutral dialysis fluid and current issues
Dialysis fluid in Japan has since improved, and neutral dialysis fluid with reduced glucose degradation products has become the standard solution. In the NEXT-PD observational study, the incidence of EPS was reportedly reduced [315]. The NEXT-PD study employed virtually the same protocol as the prospective observational study conducted with acidic solutions [314]; the incidence of EPS was 1.0% and 2.3/1000 patients/year in patients on neutral solution, with the majority displaying mild clinical symptoms. Furthermore, histological examinations of the parietal peritoneum revealed that patients on neutral dialysis fluid have minimal peritoneal blood vessel degeneration [321,322,323]. These results suggest that EPS risk decreases with neutral dialysis fluid use. At present, it is not uncommon in Japan to limit PD duration in order to avoid EPS development. However, this limit was established empirically, based only on clinical experience during the era of acidic fluids. For this reason, there is no medical rationale for setting a time limit on PD in patients on neutral solution in the clinical setting. Meanwhile, the relationship between EPS development and peritonitis is thought to be important [312], but its influence is not the same in short-term and long-term PD. Even a single episode of peritonitis in long-term cases may serve as the trigger for EPS development [324, 325]. Additionally, decreased peritoneal function, PD duration, and the number of peritonitis episodes are mutually correlated, but the independent contribution of each factor to EPS risk has not been sufficiently elucidated [326].
Regarding the influence of kidney transplantation, there are reports of cases in which EPS developed after kidney transplantation [327, 328], as well as reports of cases showing EPS remission with immunosuppressant use [329]. No definitive remarks can be made regarding kidney transplantation and EPS risk at present. In conclusion, it is essential to consider the risk of EPS in each patient and monitor them over time to avoid EPS.
Decision methods of peritoneal deterioration and issues
Peritoneal deterioration
Peritoneal deterioration comprehensively refers to changes in peritoneal function and morphology due to PD treatment [330]. The characteristics of decreased peritoneal function are ultrafiltration failure and increased peritoneal permeability. Measurement of ultrafiltration volume using 4.25% glucose PD solution and confirmation of Na sieving are recommended for determining ultrafiltration failure. In Japan, the clinical indicator is an ultrafiltration volume of less than 500 mL while using 2.5% glucose PD solution (2 L) 4 times a day [318]. Peritoneal permeability is confirmed by an increased D/P creatinine ratio (D/P Cr), which is calculated using PET. Laparoscopy [331,332,333], (parietal) peritoneal biopsy [323, 334], and mesothelial cell cytology in drainage fluid [335] are conducted to evaluate peritoneal morphology. The clinical utility of surrogate markers in drainage fluid (e.g., CA125, hyaluronic acid, matrix metalloprotease-2 [MMP-2], IL-6, VEGF, coagulation/fibrinolytic factors, and Na sieving) as markers of peritoneal deterioration has been reported [336,337,338,339,340,341]. There are reports indicating that increased blood β2 microglobulin [342] and genetic polymorphisms of the receptor for advanced glycation endproducts [343] contribute to EPS risk. However, these surrogates are not independent, and there are correlations between histological changes, mesothelial cell cytology, D/P Cr, and circulating factors [344,345,346,347,348].
EPS and peritoneal deterioration
Correlations have been reported between EPS development and increased D/P Cr [349,350,351,352], mesothelial cell surface area [352], MMP-2 in drainage fluid [338, 353], and IL-6 in drainage fluid [354]. PET-based determination of peritoneal function is not only non-invasive but also objective, simple, and cost-efficient. The ISPD Position Paper [355] emphasized peritoneal permeability with a focus on PET results. Reportedly, many EPS cases presented increased peritoneal permeability, but long-term PD patients had increased permeability as well, and thus increased permeability is not thought to be a distinct prognostic factor for EPS. Clearly, a single D/P Cr examination is insufficient for predicting EPS onset, but it provides data about changes in peritoneal function over time [350, 351]. Patients whose D/P Cr increases over time and who are categorized as "High" for over 12 months should be regarded as having advanced peritoneal deterioration, and the discontinuation of PD therapy must be considered. In this regard, ideally, PET should be conducted at least once a year, and changes in D/P Cr should be monitored. Meanwhile, mesothelial cell cytology is reportedly correlated with EPS risk, and its clinical utility in predicting EPS development has been demonstrated [352], but there are issues with its sensitivity and specificity as a predictive factor. The same issue remains unsolved for the clinical application of surrogate markers in PD drainage fluid.
For this reason, a diagnosis of peritoneal deterioration should be based on comprehensive assessment of the multiple clinical markers that are currently available. In Japan, when acidic solutions were used as the standard solution, 70% of EPS cases developed the disease after PD withdrawal [314]. Therefore, changes within the abdominal cavity after PD withdrawal are thought to be clinically important. There may be some clinical value in retaining the PD catheter for a fixed period of time even after PD withdrawal to observe the characteristics of the drainage fluid and changes in peritoneal function in cases of long-term PD therapy or suspected peritoneal deterioration. Direct laparoscopic examination is useful in confirming peritoneal deterioration and EPS. However, this procedure carries a risk of infectious peritonitis as well. Additionally, there is no consensus as to whether so-called peritoneal lavage benefits patients by preventing EPS development after PD discontinuation [352, 356, 357]. On the basis of the current information, peritoneal lavage should only be conducted in cases with a high clinical risk of EPS.
Recognition and current status of EPS
Onset pattern
The detachment and loss of peritoneal mesothelial cells due to long-term exposure to PD fluid trigger peritoneal fibrosis with exaggerated peritoneal thickening (deterioration). Moreover, capillary vessels in the peritoneum develop hyaline degeneration and lumen narrowing, which increases peritoneal permeability (first hit). In cases of a complicating inflammatory condition (i.e., bacterial peritonitis or some other, unknown factor), there is neoangiogenesis of capillary vessels in the peritoneal membrane. This further increases peritoneal permeability, even for macromolecules such as albumin and fibrin, causing fibrin-rich membrane formation over the original thickened peritoneal membrane (two-hit theory) [358, 359]. These findings have also been observed in histological analyses of peritoneal tissue [323, 360]. Clinical symptoms appear when the hard fibrin membrane extends to cover the entire intestine, restricting intestinal motility. The fibrin membrane forms continuously from the parietal to the visceral peritoneal side, often with ascites inside. Its condition can easily be confirmed with abdominal computed tomography (CT). However, the severity of peritoneal deterioration does not always correlate with the formation of an encapsulating membrane, as this can form with even mild inflammation in cases of advanced peritoneal deterioration. On the other hand, EPS can occur with sustained inflammation even in cases with absent or mild peritoneal deterioration. The balance between deterioration and inflammation of the peritoneal membrane is important; this is why EPS can occur even in patients with a relatively short PD duration. Furthermore, patients with severe peritoneal fibrosis or secondary hyperparathyroidism can experience diffuse calcium deposition between the encapsulating membrane and the degenerated peritoneum, potentially resulting in intestinal obstruction. For the diagnosis, EPS was defined in 1997 as a "syndrome that represents continuous, intermittent, or repeated ileus symptoms due to wide-ranging adhesion of the diffusely thickened peritoneum" [318]. Intestinal obstruction symptoms occur due to the formation of an encapsulating membrane that restricts intestinal motility.
Symptoms can improve with conservative support, such as temporary fasting, but they often relapse after a few months. Clinically, EPS can be diagnosed if the periods between relapses become shorter. Abdominal CT is recommended as a supplemental tool for diagnosis [361, 362]. Lastly, it is not recommended to use the term "pre-EPS" for the stage prior to EPS onset.
Pharmacological agents (corticosteroids, tamoxifen) and surgical therapy are currently used in the management of EPS. Regarding drug therapy for EPS, corticosteroids are considered the first choice for treatment in Japan [363]. These agents prevent ascites accumulation and fibrin precipitation by suppressing inflammation. For this reason, corticosteroids should be given immediately at the onset of EPS, and timely tapering according to inflammatory status is crucial. However, there are limited reports on the use of corticosteroids for EPS in countries outside Japan, and there is also no international consensus on the choice of corticosteroids as the first-line treatment for EPS [355]. On the other hand, there are no reports regarding the use of the estrogen receptor modulator tamoxifen in Japan, this drug being used primarily in Europe. It is thought to prevent peritoneal fibrosis by regulating the expression of fibrosis-related genes, suppressing mesothelial-mesenchymal transition of mesothelial cells, and enhancing the removal of degenerated collagen [364, 365]. The Dutch EPS Registry research found that groups using tamoxifen had significantly improved survival rates [366]. A Dutch EPS GL has also released a treatment algorithm based on these results, which includes steps for drug therapy (steroids, tamoxifen) and the timing of surgical therapy [367]. However, the literature on drug therapy consists mostly of case series or small-scale patient research, and no definitive conclusions can be made regarding its clinical efficacy. The British National Institute for Health and Care Excellence (NICE) GL [368] indicated that there is no clear evidence regarding the efficacy of drug therapy, leaving its use to the discretion of the physician. Surgical therapy was initially considered contraindicated [369], but reports in Japan have indicated favorable results with enterolysis [358, 370, 371]. The NICE GL indicated that surgical treatment should be implemented at an early stage by experienced teams for established EPS cases. Statistical surveys by the Japanese Society for Dialysis Therapy also reported that 79.5% of patients with a medical history of EPS had undergone some form of surgical therapy [363]. Reports from Japan [371], the UK [372], Germany [373], and the Netherlands [374] have described mortality rates of between 32 and 35%, more favorable than with conservative therapy [366, 375, 376]. Treatment performed by surgical teams that are experienced with EPS is thought to be clinically valid.
Future of EPS
EPS onset rates have decreased with the introduction of neutral dialysis fluid in Japan. However, some patients develop EPS after a relatively short period of time on PD, and peritonitis is thought to contribute to many of these cases. It is essential to establish a treatment flow process that prevents peritonitis and minimizes inflammation and to establish a preventative treatment method to further control EPS in the future.
Discontinuation of PD to avoid EPS in pediatric patients
Long-term PD therapy carries a higher risk of developing EPS, so the risks and benefits of continuing treatment need to be considered. Unnecessary long-term treatment should be avoided. PET should be conducted regularly to evaluate peritoneal function. EPS can occur even after changing treatment from PD to HD or transplantation. Follow-up is necessary, and the abdominal symptoms of EPS should be kept in mind. PD is the main modality of renal replacement therapy in children, and sufficient care must be taken to consider the risks of EPS.
PD duration
Reports from three pediatric registries have indicated that EPS onset is more likely with longer PD duration. The earliest reported registry data in Japan (1981–1999, 843 patients) indicated 17 EPS cases (prevalence of 2.0%), which was equivalent to the incidence in adults at the time [377, 378]. An analysis of PD treatment duration found that a longer duration resulted in a higher incidence of EPS, with durations longer than 5, 8, and 10 years showing incidence rates of 6.6%, 12.0%, and 22%, respectively [377, 378]. Registries of 14 European pediatric dialysis facilities (EPDWG) [379] reported EPS onset in 22 out of 1472 patients from 2001 to 2010 (prevalence of 1.5%, 8.7/1000 patients/year). Statistically significant differences in PD treatment duration were observed between EPS and non-EPS patients (p < 0.00001), with average durations of 5.9 (1.6–10.2) and 1.7 (0.7–7.7) years, respectively [379]. Italian registries from 1986 to 2011 [380] reported EPS onset in 14 out of 712 patients (prevalence of 1.9%), of whom 11 had undergone PD treatment for over 5 years. These three registries suggest that prevalence is between 1.5 and 2.0% and that EPS risk increases with a longer PD treatment duration.
Dialysis fluid types
The NEXT-PD study data on Japanese adults reported that EPS onset decreased with neutral dialysate [315]. All patients in the pediatric registries in Japan were treated in periods when acidic dialysate was used, so no comparison is possible. In the EPDWG registry, 5 out of 22 patients used neutral dialysate throughout the entire PD treatment period, and the EPS onset frequency was analyzed in relation to the dialysate used (i.e., acidic vs. neutral). However, the results indicated no statistical differences (p = 0.8) [379], so a decreased frequency due to neutral dialysate has, at the very least, not been confirmed in children.
Ultrafiltration failure, peritoneal equilibration test (PET) categories
Patients with EPS had a high frequency of ultrafiltration failure at the time of EPS onset (Japan 76% [377], Europe 88% [379]), and all such cases were categorized as "high" by PET [379]. Evaluations of peritoneal function, such as the presence of ultrafiltration failure, should be tested regularly with PET. Meanwhile, the number of patients who were categorized as "high" by PET multiple times prior to EPS onset was as follows: 3 out of 6 patients in the Japanese registry [377], 0 out of 3 patients in the European registry [379], and 4 out of 4 patients in the Italian registry [380]. Care must be taken, as EPS onset can occur even if PET does not classify the patient into the "high transporter" category.
Peritonitis
The EPDWG reported that the frequency of peritonitis in EPS patients was significantly higher than in non-EPS patients (p = 0.02), with average frequencies of 1.9 (0.9–3.1) and 0.72 (0.3–1.2) episodes per year for EPS and non-EPS patients, respectively [315].
However, no significant differences were observed in the registries of either Japan (EPS 0.44 episodes/year; non-EPS 0.42 episodes/year) [377] or Italy (EPS 0.45 episodes/year; non-EPS 0.42 episodes/year) [380]. Nevertheless, EPS onset was observed immediately after peritonitis in 9 out of 17 patients with EPS in Japan [378]. The EPDWG also reported that the frequency of peritonitis in patients with EPS is four times higher than that in patients reported in other registries. Thus, EPS onset needs to be considered in patients who frequently suffer from peritonitis or when peritonitis develops in a peritoneum deteriorated by long-term PD.
Time of onset
EPS onset often occurs during PD treatment (64–77%) [377, 379, 380], but it can also occur after transitioning to HD (14–29%) [378,379,380] or after transplantation (9–21%) [379, 380]. EPS onset is common in the first year after transitioning to HD [380], but there are also reports of pediatric cases in which EPS onset occurred 8 years after transplantation [381], so attention must be paid to clinical symptoms suggestive of EPS even after PD therapy has ended.
Predictive factors
EPS is challenging to diagnose prior to the onset of clinical symptoms. The value of PET data has been discussed in the sections above. There are data regarding the use of CT scans in adults, but EPS can occur even if a CT performed within the preceding year shows no abnormalities [362]. Therefore, CT is not suitable as a screening tool [382]. Reports of peritoneal biopsies in patients who had undergone PD for over 5 years or were clinically suspected of having EPS indicated that patients with no EPS symptoms could continue PD treatment if mesothelial detachment in the peritoneum and stenosis of the arteriolar lumen were not present [383]. However, routine peritoneal biopsy may be difficult to rely on when deciding whether to continue PD treatment [382]. Mortality was 17% in Japan [378], 13% (average duration of 4.8 years after diagnosis) in the EPDWG [379], and 43% in Italy [380]. It is surmised that mortality reported in Japan was lower than that in Italy because the Japanese registry does not include transplant patients, who are likely to have more severe prognoses [380]. However, the reason for the difference in prognoses between the Italian registry and the other registries is unknown. Both the Japanese and EPDWG data displayed a more favorable prognosis in pediatric EPS cases than in adult cases [379].
Chapter 6 Peritonitis management
Epidemiology and incidence of PD-related peritonitis
Causes of infection and definition of peritonitis
PD-related peritonitis results in decreased peritoneal function, catheter removal, transfer to HD, progression to EPS, and death. Therefore, its prevention and/or early-stage treatment is vital. Peritonitis is classified according to the cause (infection pathway) into exogenous and endogenous infections. Patients who have had peritonitis and experience recurrence after it has healed are classified as having recurrent peritonitis if the recurrence occurs within 4 weeks. Peritonitis can also be classified as recurrent, relapsing, repeat, refractory, or catheter-related peritonitis. PD-related peritonitis causes decreased ultrafiltration capacity or peritoneal dysfunction. It is a major problem, as it can lead to catheter removal, transfer to HD, progression to EPS, and death; this further highlights the importance of prevention and/or early-stage treatment [384,385,386,387,388].
There are various infection pathways, which can be classified as follows:
Exogenous infection
Transcatheter infection due to operational error during bag exchange
Paracatheter infection due to subcutaneous tunnel infection spread from an exit-site infection
Infection introduced by the catheter at the time of its insertion
Endogenous infection
Transintestinal infection via bacterial migration due to diverticulitis
Hematogenous infection
Infection through the vagina
Miscellaneous infections such as intraperitoneal abscesses
The infection typically occurs due to touch contamination or spread of an exit-site/tunnel infection. Other endogenous infections include spread of intestinal infection (cholecystitis, appendicitis, ruptured diverticulum, severe diarrhea, intestinal perforation, ileus, incarcerated hernias) and hematogenous infections [389,390,391,392,393,394]. Peritonitis includes recurrent peritonitis, relapsing peritonitis, repeat peritonitis, refractory peritonitis, and catheter-related peritonitis, which are defined as follows [395]:
Recurrent: An episode that occurs within 4 weeks of completion of therapy of a prior episode but with a different organism
Relapsing: An episode that occurs within 4 weeks of completion of therapy of a prior episode with the same organism or one sterile episode
Repeat: An episode that occurs more than 4 weeks after completion of therapy of a prior episode with the same organism
Refractory: Failure of the effluent to clear after 5 days of appropriate antibiotics
Catheter-related peritonitis: Peritonitis in conjunction with an exit-site or tunnel infection with the same organism or one site sterile
The incidence of peritonitis should be expressed as the number of episodes per patient-year. The incidence of peritonitis in Japan is low, at 0.21–0.24 episodes/patient-year. The incidence of peritonitis has traditionally been reported as one episode per number of patient-months of treatment. However, the ISPD Committee has recommended reporting the incidence as the number of episodes per patient-year [395] (see the illustrative calculation below). Points of caution include counting from the first date of PD training (i.e., from the start of PD), counting relapsing peritonitis (peritonitis that occurs due to the same microbe within 4 weeks after the end of peritonitis treatment) as one episode, and counting peritonitis that occurred when medical staff exchanged bags during hospitalization as one episode. In addition to calculating the overall incidence of peritonitis, statistics according to the causative organism and its antibiotic sensitivity also need to be considered [391]. The predominant causative organisms may vary according to the facility, and countermeasures need to be prepared for high-incidence facilities. Therefore, it is important to know the infection tendencies at each facility [395]. Until 2010, ISPD guidelines advised a target incidence of peritonitis of no higher than 0.67/patient/year and an ideal incidence of less than 0.36/patient/year [396]. However, in 2016 they stated that it should not exceed 0.5/patient/year [395]. There have been numerous reports on the incidence of peritonitis in recent years, and reports often indicate an incidence of around 0.18–0.5/patient/year [397,398,399,400,401,402,403,404,405,406,407,408,409,410,411,412]. There appear to be no differences according to the primary disease, even with lupus nephritis or polycystic kidney disease [413, 414]. A retrospective study on the incidence of peritonitis in Japan in 1996 reported a value of 0.23/patient/year [415].
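As a purely illustrative aid (not part of the guideline), the episodes-per-patient-year convention and the counting rules described above can be expressed as a short calculation. The function name and example numbers below are hypothetical; relapsing episodes are folded into their original episode, as stated in the text.

```python
def peritonitis_rate(total_episodes, relapsing_episodes, total_patient_months):
    """Illustrative episodes-per-patient-year calculation.

    total_episodes       : all peritonitis episodes observed at the facility
    relapsing_episodes   : episodes judged to be relapses (same organism,
                           within 4 weeks of finishing prior therapy); each
                           relapse is counted together with its original episode
    total_patient_months : summed months of PD exposure, counted from the
                           first day of PD training for every patient
    """
    counted_episodes = total_episodes - relapsing_episodes
    patient_years = total_patient_months / 12.0
    return counted_episodes / patient_years

# Hypothetical facility: 14 episodes (2 of them relapses) over 600 patient-months.
rate = peritonitis_rate(14, 2, 600)
print(f"{rate:.2f} episodes/patient-year")  # 0.24, within the reported Japanese range
```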
The Japanese Society for Dialysis Therapy reported an incidence of 0.21–0.24/patient/year, with no large variations observed from 2012 to 2015, suggesting that the management of peritonitis is favorable in Japan. A report in 2015 indicated that the incidence of peritonitis was somewhat higher among males and tended to increase with age. No correlation was observed between the incidence of peritonitis and dialysis history. The incidence of peritonitis according to the primary disease was reported to be increased in cases of nephrosclerosis or diabetic nephropathy. As described, the incidence of peritonitis in Japan is low and well managed. However, peritonitis remains the primary cause of PD withdrawal, and the development of continuous preventative measures is anticipated in the future [385].
Diabetes and obesity are considered risk factors for peritonitis, but the effect of dialysis modality is unclear. Risk factors for PD-related peritonitis include diabetes, obesity, ethnicity, climate, and depression [416]. Reports have also indicated that PD catheter design and Y-set bags can help control the incidence of peritonitis after PD initiation [417, 418]. No differences in incidence have been observed between APD and CAPD [419].
Causative organisms
Causative organisms in Japan are often gram-positive cocci (commonly coagulase-negative staphylococci). Peritonitis due to mycobacteria, fungi, or anaerobic bacteria is refractory, and catheter removal rates are high. The most common causative organism is coagulase-negative staphylococci [412]. Reports from Japan indicated that 42.7% of causative organisms were gram-positive cocci, of which the most common was Staphylococcus at 21.5% [387]. Research on 6639 patients from 2003 to 2008 in Australia indicated that gram-positive and gram-negative organisms accounted for 53.4% and 23.6%, respectively. The report indicated that mycobacterial and fungal infections were rare but that their catheter removal rates were high, as with anaerobic bacteria [420]. The causative organisms vary from facility to facility, and it is therefore important to know their frequencies at each facility [395].
PD-related peritonitis symptoms and diagnosis
Peritonitis symptoms
The primary symptoms of peritonitis include abdominal pain and/or cloudy dialysis effluent. Cloudy effluent can appear even without abdominal pain, so daily observation of the dialysis effluent is important. Many patients with peritonitis report abdominal pain and cloudy dialysis effluent. Reports indicate a prevalence of abdominal pain in 80% of patients, fever over 37.5 °C in 30%, nausea and/or vomiting in 50%, cloudy effluent in 80%, and hypotension in 20% [392]. Patients with peritonitis usually have abdominal pain and rebound tenderness without having a board-like abdomen. Abdominal pain and cloudy effluent do not necessarily occur simultaneously, and cloudy effluent may occur after a delay. Cloudy effluent can also be caused by conditions other than peritonitis, but it is most often associated with PD-related peritonitis [421]. Diagnosis can be considerably delayed if the patient has no abdominal pain and cloudy effluent is not recognized, so daily observation of the dialysis effluent is important.
Diagnosis of peritonitis
Peritonitis is diagnosed when at least two of the following are observed: (a) abdominal pain and/or cloudy dialysis effluent, (b) white blood cell count in dialysate effluent above 100/μL or above 0.1 × 10⁹/L (after a dwell time of at least 2 h) with a polymorphonuclear leukocyte percentage of over 50%, and (c) positive dialysis effluent culture. Peritonitis is diagnosed in APD patients if the neutrophil percentage is over 50%, even with a leukocyte count below 100/μL. If peritonitis is suspected, it is recommended that the dialysate be drained, the external appearance of the effluent be carefully observed, and the effluent be submitted for cell count (including differentiation), Gram staining, and culture tests [422]. The ISPD guidelines indicate that PD patients with cloudy dialysis effluent should be assumed to have peritonitis and recommend continuing treatment for peritonitis until the diagnosis is confirmed or excluded [395]. Following proposals by the ISPD in 2016 [395], the criteria for the diagnosis of peritonitis currently used throughout the world are as noted below [423, 424]. A diagnosis of peritonitis is recommended if at least two of the following clinical criteria are met [395]:
Clinical signs of peritonitis: abdominal pain and/or cloudy dialysis effluent
White blood cell count in dialysate effluent above 100/μL or 0.1 × 10⁹/L (after a dwell time of at least 2 h), with a polymorphonuclear leukocyte percentage of at least 50% [425]
Positive dialysis effluent culture
Physicians should consider the patient's symptoms, the presence of contamination during recent PD fluid exchange, opportunities for bacterial contamination such as unexpected PD connection problems or tube cutting, upper/lower endoscopy or gynecologic procedures, diarrhea or constipation, history of peritonitis, and past or current exit-site infections. The tunnel and exit sites of catheters should be carefully observed and actively checked for any pus discharge. Cultures of any discharge should be taken. Typical physical findings include abdominal pain, which spreads throughout the abdominal area and, at times, is accompanied by muscular defense. Patients with endogenous peritonitis often have systemic illnesses such as sepsis. Therefore, those with localized pain/tenderness or pus discharge within their dialysis effluent need to be carefully examined for surgical etiologies such as intra-abdominal abscesses. Abdominal X-rays and blood cultures are not essential in cases of standard PD-related peritonitis, but these should be conducted if sepsis due to endogenous infection is suspected from clinical symptoms such as those outlined above. Meanwhile, APD patients often do not recognize cloudy effluent, as the white blood cell count in dialysate effluent is influenced in part by the dwell time. For these reasons, the percentage of polymorphonuclear cells in the dialysate effluent, along with confirmation of cloudy dialysis effluent, is used for diagnosing peritonitis rather than the absolute white blood cell count in patients on APD with rapid-cycle treatment. If the percentage of polymorphonuclear cells is over 50%, the patient is diagnosed with peritonitis even if the white blood cell count is below 100/μL [425]. In patients on APD without any dialysate dwell during the daytime, 1 L of dialysis solution should be infused, dwelled for 1–2 h, and then drained for further examination.
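The two-of-three diagnostic rule and the rapid-cycle APD exception described above can be summarized in a short, purely illustrative sketch. The function name, parameter names, and the simplified handling of the APD rule are assumptions made for illustration only; this is not a clinical decision tool and does not replace the criteria as stated in the text.

```python
def meets_peritonitis_criteria(clinical_signs: bool,
                               wbc_per_uL: float,
                               pmn_fraction: float,
                               positive_culture: bool,
                               rapid_cycle_apd: bool = False) -> bool:
    """Illustrative restatement of the two-of-three diagnostic rule from the text.

    clinical_signs  : abdominal pain and/or cloudy dialysis effluent
    wbc_per_uL      : effluent white blood cell count (cells per microlitre),
                      after a dwell of at least 2 h
    pmn_fraction    : polymorphonuclear cell fraction of effluent WBCs (0-1)
    positive_culture: positive dialysis effluent culture
    rapid_cycle_apd : rapid-cycle APD, where short dwells lower the absolute count
    """
    # Criterion (b): WBC > 100/uL with > 50% polymorphonuclear cells;
    # in rapid-cycle APD the PMN percentage alone (> 50%) is used instead
    # of the absolute count (a simplification of the text).
    if rapid_cycle_apd:
        cell_criterion = pmn_fraction > 0.50
    else:
        cell_criterion = wbc_per_uL > 100 and pmn_fraction > 0.50

    criteria_met = sum([clinical_signs, cell_criterion, positive_culture])
    return criteria_met >= 2

# Hypothetical example: cloudy effluent, WBC 350/uL with 80% neutrophils, culture pending.
print(meets_peritonitis_criteria(True, 350, 0.80, False))  # True
```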
Following a diagnosis of peritonitis, the current episode must be classified as recurrent, relapsing, or repeat peritonitis. This is because the incidences of recurrent and relapsing peritonitis are 14% and 5%, respectively, and both are considered risk factors for catheter removal or permanent transfer to HD [426]. Repeat peritonitis occurs at an incidence of around 10%, and it most commonly occurs within 2 months after resolution of the previous episode by antibacterial drug therapy [427, 428].
Methods for analyzing causative organisms
The identification of causative organisms is essential for determining the cause of infection, selecting antibacterial drugs, and preparing subsequent preventative measures. Blood culture bottles should be used for bacterial cultures of PD effluent. Culturing the effluent after centrifugation not only increases the bacterial detection rate but also reduces the time to culture positivity. The ISPD guidelines [395] recommend the following with regard to the identification of causative organisms:
Use of blood culture bottles for bacterial cultures of PD effluent
Review and revision of the sample collection and culture methods in facilities where the culture-negative rate is above 15%
Cultures of PD effluent allow for the selection of appropriate antibacterial drugs by identifying the pathogen and testing antibacterial drug sensitivity. Depending on the pathogen identified, the culture can also point to specific infection sources. Gram staining of PD effluent often yields a negative result, but it should still be conducted considering its contribution to early-stage antibacterial drug selection and administration when it turns out positive [429]. Gram staining positivity increases by a factor of 5 to 10 when 50 mL of dialysis effluent is centrifuged for 15 min at 3000g, the sediment is resuspended in 3–5 mL of physiological saline solution, and the suspension is then inoculated onto a solid medium or into standard blood culture medium [430]. Favorable sensitivity is obtained by injecting 5–10 mL of PD effluent into blood culture bottles (aerobic/anaerobic), and the culture-negative rate is typically around 10–20% when these methods are used [431, 432]. The detectable strains can vary according to whether the method is based on lysis in water or on the addition of surfactants such as Tween-80 and Triton-X followed by culture on blood agar; reports have indicated that the combination of these two methods increased positivity [433]. Samples should ideally be delivered to laboratories within 6 h; if this is difficult, the blood culture bottles injected with PD effluent should be incubated at 37 °C. Culturing on solid medium should be done in aerobic, microaerophilic, and anaerobic environments. The time taken until pathogens are identified is crucial. The previously mentioned centrifugal sedimentation not only increases the bacterial detection rate but also reduces the time until culture positivity. Microbiological diagnoses can be confirmed within 3 days for over 75% of patients. If the culture remains negative 3–5 days after commencement, the cell count and differentiation of the PD effluent should be re-measured and fungal/mycobacterial cultures should be conducted alongside confirmation of the response to treatment.
Furthermore, conducting subcultures for 3–4 days under aerobic, microaerophilic, and anaerobic conditions can detect slow-growing bacteria or yeasts that would not be detected using automatic culture systems.
New detection methods
Various early diagnosis techniques have been reported, but none have been established as superior to conventional methods. Leukocyte esterase reagent strips [434], biomarker assays [435], and polymerase chain reaction (PCR) analysis of bacterially derived DNA fragments [436, 437] were proposed in the 2000s. Reports in the 2010s used bacterial 16S rRNA gene sequence analysis [438], matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF) [439], and pathogen-specific local immune fingerprints [440, 441]. However, there is no evidence that any of these techniques are superior to conventional methods.
PD-related peritonitis treatment
Empiric treatment should be started promptly with antibacterial drugs once samples have been collected for identification. Empiric treatment should include a 1st-generation cephalosporin for Gram-positive bacteria and a 3rd-generation cephalosporin or an aminoglycoside for Gram-negative bacteria. Vancomycin should be administered for methicillin-resistant Staphylococcus aureus (MRSA). Treatment should be switched to a suitable antibacterial drug, and an appropriate treatment duration should be implemented, once culture results and sensitivities are determined. The impact on disease severity and peritoneal function can be reduced by beginning antibacterial drug treatment for peritonitis as soon after sample collection as possible. An antibacterial drug regimen that covers both Gram-positive and Gram-negative bacteria needs to be selected as empiric treatment prior to identification of the causative organism. The effectiveness of treatment with a 1st-generation cephalosporin or a quinolone alone is less than 70% [442]. Antibacterial drugs used for Gram-positive bacteria include 1st-generation cephalosporins and glycopeptides (vancomycin or teicoplanin). Comparisons between 1st-generation cephalosporins and vancomycin have suggested that the latter has superior performance with regard to peritonitis resolution rate, hospitalization rate, and superinfection [443]. However, some reports have also indicated that there were no significant differences in the resolution rate [444]. A meta-analysis showed that glycopeptides are superior to 1st-generation cephalosporins, but this result might largely be due to the insufficient dose of cefazolin used in the glycopeptide comparison trial [443, 416]. ISPD guidelines recommend that facilities with high methicillin-resistant bacteria detection rates use vancomycin [395, 445]; however, in Japan its use is covered by health insurance only for MRSA infections, so facilities restrict its use accordingly. Antibacterial drugs for Gram-negative bacteria include 3rd-generation cephalosporins and aminoglycosides. Other effective drugs include cefepime and carbapenems. Quinolones are efficacious in regions where resistance against them is not widespread. There have been reports on various combinations of these drugs. Studies that compared combined ceftazidime + cefazolin treatment with combined aminoglycoside (netilmicin) + cefazolin treatment showed no significant differences [446].
There were no significant differences between combined cefazolin + aminoglycoside (netilmicin) treatment and combined vancomycin + ceftazidime treatment either [447]. There were also no significant differences in reactivity or resolution rate between cefepime treatment and combined vancomycin + aminoglycoside (netilmicin) treatment [448]. There were no significant differences in reactivity between combined carbapenem (meropenem) + aminoglycoside (tobramycin) treatment and combined carbapenem (meropenem) + vancomycin treatment [449]. There were no significant differences in effectiveness between combined ceftazidime + cefazolin treatment and carbapenem (imipenem/cilastatin) treatment either [450]. Reports have also examined the effectiveness of quinolones. There were no significant differences in effectiveness between quinolone (ofloxacin) treatment and combined cephalothin + aminoglycoside (tobramycin) treatment [451]. However, reports have indicated that S. aureus elimination was slow with quinolone (ciprofloxacin) [452]. There were no significant differences in effectiveness between combined quinolone (pefloxacin) + vancomycin treatment and combined aminoglycoside (gentamicin) + vancomycin treatment [453]. There were no significant differences in effectiveness between combined quinolone (levofloxacin) + vancomycin treatment and combined aminoglycoside (netilmicin) + vancomycin treatment [454]. Combined vancomycin + quinolone (ciprofloxacin) treatment [455] and combined cefazolin + quinolone (ciprofloxacin) treatment [456] were also shown to be effective. Patients with allergies to cephalosporins can use aztreonam as a replacement; combined aztreonam + cefuroxime treatment was found to be effective [457]. Reports have not indicated decreased residual renal function with short-term aminoglycoside treatment [458, 459], but long-term (i.e., over 3 weeks) or repeated use should be avoided, as it can increase the risk of developing a hearing disorder [460]. Regarding the route of antibacterial drug administration, drug concentrations within the abdominal cavity are higher with intra-abdominal administration than with intravenous administration, so ISPD guidelines recommend the former. The current systematic review results are as indicated in CQ 5 (note: intra-abdominal administration is currently not covered by health insurance). Reports have indicated that cefazolin was retained for 6 h in the abdominal cavity and that suitable blood and intra-abdominal concentrations were maintained for 24 h [461]. Recent reports have conducted detailed analyses of intravenous and intra-abdominal meropenem administration; the results indicated that blood concentrations were equivalent but that concentrations in the dialysis fluid were lower with intravenous administration [462]. Antibacterial drugs can be administered into the abdominal cavity either continuously or intermittently (once daily), but the latter is generally used. Intermittent administration requires at least 6 h of dwell in the abdominal cavity for sufficient absorption [463]. Vancomycin, aminoglycosides, and cephalosporins can all be added to the same dialysis fluid, but aminoglycosides and penicillins cannot be combined due to their incompatibility [464]. Please refer to Tables 6 and 7 for antibacterial drug doses.
Table 6 Recommended intraperitoneal antimicrobial agent doses for peritonitis treatment
Table 7 Recommended systemic antimicrobial agent doses for peritonitis treatment
Analyses have also been done on the stability of antibacterial drugs in dialysis fluid. The combination of cefazolin and ceftazidime in glucose-based PD fluid and in icodextrin-based dialysis fluid was stable for 24 h at 37 °C and for 7 days at 4 °C [465]. Other reports have indicated that this combination was stable for 14 days at 4 °C [466]. These results suggest that 4 °C is optimal when storing for more than 1 day. Patients undergoing APD may have insufficient intra-abdominal concentrations of the antibacterial drug because of the frequent exchanges performed by the cycler, so temporarily switching from APD to CAPD should be considered. Reports have indicated that there were no significant differences in the recurrence rate, mortality, or catheter removal between APD and CAPD for peritonitis treatment; however, a longer period of elevated leukocyte counts and a longer antibacterial drug treatment period were observed with APD [467]. Many patients show clinical improvement within 48 h of empiric treatment of typical peritonitis. The cell count and bacterial cultures of the drainage fluid need to be re-evaluated in patients who do not improve. Reports have indicated that a drainage fluid leukocyte count above 1090/mm³ after the 3rd day of treatment was an independent predictive factor for treatment failure [468].
Treatment according to causative organism
Coagulase-negative staphylococci
Intra-abdominal administration of cephalosporin-based antibacterial drugs (vancomycin for resistant bacteria) should be given for 2 weeks. Coagulase-negative staphylococci (CNS) are normal skin flora and include Staphylococcus epidermidis. Touch contamination is the cause of many cases. Symptoms are usually not severe, and responses to antibacterial drugs are favorable [469]. Vancomycin is used in patients with methicillin-resistant strains. A report on 232 patients with CNS-induced peritonitis from a hospital in Hong Kong indicated that the initial-stage response rate was 95.3%, with 49.5% of isolates being methicillin-resistant [470]. A report on 65 patients with CNS-induced peritonitis indicated that 58.5% of isolates were cephalosporin-resistant [471]. Generally, 2 weeks of antibacterial drug treatment were sufficient for the patients in these reports. Vancomycin use was a prognostic factor in cases of methicillin-resistant bacteria [472]. Relapsing peritonitis can occur when a biofilm has formed on the PD catheter, and catheter replacement is necessary in such cases [472].
Staphylococcus aureus
Antibacterial drugs should be administered for 3 weeks. S. aureus-based peritonitis can occur with touch contamination, exit-site infection, and tunnel infection. Intra-abdominal administration of a 1st-generation cephalosporin should be used if the organism is sensitive, and vancomycin should be used if MRSA is present. Studies that compared cefazolin and vancomycin as initial-stage treatment in 503 patients with S. aureus-based peritonitis showed no significant differences in resolution rate between the two groups [473]. The presence of MRSA was an independent risk factor for transition to HD [472]. Similarly, a report on 245 cases with S. aureus indicated no significant differences in the resolution rate [474].
These reports indicated that combined rifampicin therapy reduced the risk of relapsing and recurrent peritonitis but that care must be taken regarding bacterial resistance and drug interactions induced by long-term administration [473]. Reports have indicated that teicoplanin and daptomycin were efficacious as replacement drugs [475, 476]. A treatment period of 3 weeks is ideal [473, 474]. Peritonitis induced by S. aureus due to catheter infection is refractory, and many cases require catheter removal [477].
Enterococcus
Intra-abdominal administration of vancomycin should be conducted for 3 weeks. Additional administration of aminoglycoside-based antibacterial drugs should be considered for severe cases. Enterococcus-based peritonitis is often accompanied by intense abdominal pain and can often become severe. Enterococcus is part of the normal bacterial flora of the intestinal canal, and the causes of peritonitis may not only be touch contamination or catheter-related infection but may also be related to intra-abdominal pathology. Reports on 116 cases of Enterococcus-based peritonitis indicated increased catheter removal, HD transition, and mortality rates when multiple organisms were detected [478]. Furthermore, reports have indicated that HD transitions can be reduced by conducting catheter removal within 1 week. Enterococcus is typically cephalosporin-resistant, but pediatric research reported that cephalosporins were efficacious as an initial-stage treatment [479]. Intra-abdominal administration of vancomycin is recommended if the organism is sensitive to vancomycin. In severe cases, additional administration of aminoglycoside-based antibacterial drugs should be considered. For vancomycin-resistant Enterococcus (VRE), intra-abdominal administration of ampicillin should be conducted if the organism is sensitive to ampicillin. Reports have indicated that linezolid [480], quinupristin/dalfopristin [481], and daptomycin [482] were efficacious if the organism is resistant to ampicillin.
Streptococcus
Antibacterial drugs should be administered continuously for 2 weeks. Streptococcus is often orally derived and usually responds favorably to antibacterial drugs. A report on 256 patients with Streptococcus-induced peritonitis indicated that intra-abdominal administration of a 1st-generation cephalosporin or vancomycin resulted in lower recurrence, catheter removal, and mortality rates than reported for other organisms [483]. Reports have indicated that Streptococcus viridans, part of the normal bacterial flora of the oral cavity, responds poorly to antibacterial drugs and is highly recurrent [484, 485].
Corynebacterium
Treatment is conducted for 3 weeks with antibacterial drugs. Corynebacterium species are normal bacterial flora of the skin, but peritonitis due to these bacteria is rare. An analysis of 27 patients with corynebacterial peritonitis indicated that 13 patients had recurrence, of whom 8 were cured after 3 weeks of vancomycin administration [486]. An analysis of 82 patients with corynebacterial peritonitis indicated that the outcome of 2 weeks of vancomycin treatment was as follows: relapse in 18%, recurrence in 15%, catheter removal in 21%, HD transition in 15%, and death in 2% [487].
Pseudomonas aeruginosa
Two types of antibacterial drugs with different mechanisms should be administered for 3 weeks. Many patients with catheter infections will need to have their catheters removed. Pseudomonas aeruginosa-induced peritonitis is severe, and many patients with catheter infections will need to have them removed [488].
Analyses of 104 patients with Pseudomonas aeruginosa-induced peritonitis showed that 45.2% had exit-site infections, the initial-stage response rate was 60.6%, and the complete resolution rate was 22.1%. Furthermore, groups that used a 3rd-generation cephalosporin had significantly more favorable responses than those who used aminoglycosides [489]. Analyses of 191 patients with Pseudomonas aeruginosa-induced peritonitis showed that catheter removal and HD transitions were significantly higher than with other bacterial strains [490]. Empiric treatment results showed no differences, but the subsequent use of two types of antibacterial drugs against Pseudomonas aeruginosa could reduce HD transitions, and the mortality rate was lower with catheter removal than with single antibacterial drug therapy [488]. Antibacterial drugs with two different mechanisms of action need to be selected: a combination of either intra-abdominal gentamicin or oral ciprofloxacin together with intra-abdominal ceftazidime or cefepime should be used. Carbapenem-based drugs should be administered for Pseudomonas aeruginosa that is resistant to cephalosporins or penicillins [395].
Other Gram-negative bacteria
Peritonitis due to Gram-negative bacteria other than Pseudomonas aeruginosa is said to be caused by touch contamination, exit-site infections, constipation, and colitis. Antibacterial drugs should be selected based on efficacy, safety, and simplicity in cases where a single organism is detected. Analyses in Australia indicated that Gram-negative bacteria other than Pseudomonas aeruginosa accounted for 23.3% of all peritonitis cases, with the most common being Escherichia coli; Klebsiella, Enterobacter, Serratia, Acinetobacter, Proteus, and Citrobacter were also included. Additionally, a quarter of the cases had multiple bacterial species present [491]. Analyses of 210 patients with enterobacterial peritonitis indicated that 111 were due to E. coli, with an initial response rate of 84.8% and a resolution rate of 58.1%. Of these, 39% did not respond to single antibacterial drug treatment even though in vitro sensitivity had been confirmed, and a second antibacterial drug was added. Patients who were administered a second antibacterial drug had only a slightly lower risk of relapse and recurrence than those who were administered only one antibacterial drug [492]. Antibacterials at times showed no effect in patients with biofilm formation, even if the organisms were sensitive [493]. Extended-spectrum beta-lactamase (ESBL)-producing organisms have increased in recent years [494]. ESBL-producing organisms are resistant to all cephalosporin-based antibacterial drugs but are typically sensitive to carbapenem-based antibacterial drugs [50, 494]. Carbapenem-resistant enterobacteria have also increased. They are usually resistant to all beta-lactam derivative antibacterial drugs and fluoroquinolones, and they respond variably to aminoglycosides, with sensitivity observed for polymyxin and colistin [495, 496]. Although rare, Stenotrophomonas responds to only a few antibacterial drugs [497]. Reports have indicated that, even if improvement is seen with Stenotrophomonas-based peritonitis, two types of efficacious antibacterial drugs should be administered for 3–4 weeks. Other efficacious treatments include oral trimethoprim/sulfamethoxazole (ST compound), tigecycline, polymyxin B, and colistin.
Polymicrobial peritonitis
The necessity for surgical intervention should be quickly evaluated when a polymicrobial enterobacterial infection is detected. Antibacterial drug therapy should be continued for 3 weeks where a polymicrobial Gram-positive bacterial infection is detected. Reports have indicated that the bacterial strains in polymicrobial peritonitis were combinations of S. epidermidis with other coagulase-negative staphylococci, Klebsiella, and Enterococci, or of Escherichia coli and Klebsiella. Patients with chronic respiratory disease more commonly have polymicrobial infections than single-organism infections, and polymicrobial infection is associated with a higher risk of hospitalization, catheter removal, HD transition, and death [478, 498,499,500]. An analysis of 140 patients with peritonitis in Japan indicated that 19 (13.5%) had polymicrobial infections [398]. Antibacterial drug therapy for polymicrobial peritonitis (including episodes involving Enterococci) often includes cefazolin, vancomycin, gentamicin, and ceftazidime as a primary regimen, with the addition of vancomycin, gentamicin, 3rd-generation cephem antibiotics, carbapenems, or anti-fungal agents as a secondary regimen [478]. Analyses of protocol utility for peritonitis caused by bacteria from the intestinal canal indicated that 20–24.9% of episodes were polymicrobial. Observational studies also indicated that a three-step protocol of (1) suspending PD for 1 week without removing the catheter, (2) administering intravenous meropenem (0.5 g/day), and (3) retaining meropenem in the catheter (0.125 g diluted in 25 mL of physiological saline solution) was significantly more efficacious for polymicrobial infections than the intra-abdominal administration of gentamicin (20 mg/L, once per day) and rifampicin (50 mg/L, every session) [394, 501]. However, intra-abdominal rifampicin administration is not common in Japan. The primary antibacterial drug regimen for peritonitis in Japan is often cefazolin and ceftazidime, and 75% of peritonitis patients receive therapy with two types of antibacterial drugs [502]. In conclusion, treatment should begin with the primary regimen established at each facility, with vancomycin, aminoglycosides, new quinolones, or anti-fungal agents added based on organism identification and sensitivity results. Antibacterial drug therapy should be conducted for 3 weeks. An intra-abdominal source should be considered as the cause of illness if multiple enterobacteria are cultured from the PD drainage fluid. Patients with hypotension, septicemia, lactic acidosis, or elevated drainage fluid amylase concentrations have an increased likelihood of serious intra-abdominal pathology [503, 504]. Contrast-enhanced CT imaging should be conducted promptly, and the necessity of surgical treatment, including laparotomy, should be discussed. When necessary, surgical procedures should be performed immediately. An antibacterial drug with anaerobic coverage should be given. Meanwhile, favorable progress is observed when the cause of peritonitis is multiple Gram-positive bacteria [498, 505].
Culture-negative peritonitis
The 2016 ISPD guidelines recommend that the incidence of culture-negative peritonitis be kept below 15%. Catheter removal should be actively considered if the infection is not sufficiently resolved after 5 days of antibacterial drug therapy. The 2016 ISPD guidelines recommend keeping the incidence of culture-negative peritonitis below 15% [395].
Culture-negative peritonitis is common among female patients with diabetes or those within 3 months of starting PD. Reports have indicated that, compared with those with culture-positive peritonitis, many of these patients had used antibacterial drugs prior to onset [506, 507]. Culture-negative peritonitis can be cured with antibacterial drug therapy alone and has low rates of hospitalization, catheter removal, HD transition, and death [506]. However, recurrent culture-negative peritonitis often requires catheter removal [506]. Reported culture-negative rates range from 10 to 32% [384, 391, 398, 508,509,510]. Catheter removal was conducted in 103 out of 808 patients with culture-negative peritonitis [420]. Culture-negative peritonitis rates with biocompatible neutral PD fluid were comparable with those using conventional dialysis fluid [511]. An analysis of PD-related peritonitis in Japan over a 1-year period in 2013 indicated a culture-negative rate of 23.4% [502]. Out of 120 patients with culture-negative peritonitis, 15 discontinued PD [502]. Concentrated culture methods had lower culture-negative rates than other methods [512]. There are reports of cases in which Paracoccus yeei or Mycobacterium abscessus was eventually identified even though the PD peritonitis was culture-negative at first; multiple cultures may therefore be required because of the possibility of non-conventional microbes [513, 514]. Many reports have described treatment using vancomycin or cefazolin in addition to gentamicin for culture-negative peritonitis [515, 516]. Antibacterial drugs are used for 14 days as initial-stage treatment for culture-negative peritonitis [517, 518]. Antibacterial drugs such as cefazolin and ceftazidime are often used as a primary regimen in Japan, and 75% of treatments consist of two types of antibacterial drugs [502]. Catheter removal should be actively considered if the infection is not sufficiently resolved after 5 days of empiric antibacterial drug therapy.
Fungal peritonitis
The catheter should be promptly removed in patients with fungal peritonitis. Anti-fungal agents should be administered continuously for 2 weeks after catheter removal. Fungal peritonitis is a serious complication with a high risk of catheter removal, HD transition, and mortality [420, 519]. Fungal peritonitis is rare among PD-related peritonitis cases, with a reported incidence of 2.6–3.1% [420, 500]. Reports have indicated that fungal peritonitis is common in tropical regions and in the summer and autumn months [423, 520, 521]. Initial-stage treatment generally involves a combination of amphotericin B and flucytosine. However, the intra-abdominal administration of amphotericin B can cause chemical peritonitis and pain due to chemical irritation, while intravenous administration does not achieve favorable penetration into the peritoneum. Frequent monitoring of the blood flucytosine concentration is essential when using this drug, in order to avoid myelosuppression. Peak serum flucytosine concentrations should be measured 1–2 h after oral administration and controlled to 25–50 μg/mL [522]. Out of 10 patients with non-Candida fungal peritonitis, 9 had their catheters removed and received either fluconazole or itraconazole [523]. Preventative administration of two types of anti-fungal agents has been shown to be effective [524,525,526]. Other options for anti-fungal agents include fluconazole, echinocandin-based anti-fungal agents, posaconazole, and voriconazole.
Fluconazole is widely used, but azole-resistant cases have been increasing [527]. Fluconazole is effective only against Candida and Cryptococcus. Echinocandin-based anti-fungal agents are effective for Aspergillus and Candida (except for Candida albicans). Additionally, fluconazole should be administered to patients who do not respond to other forms of anti-fungal agent therapy [528,529,530]. Caspofungin is effective as monotherapy or in combination with amphotericin B [528, 529]. Posaconazole and voriconazole are effective in treating peritonitis induced by filamentous fungi [531,532,533]. Voriconazole has been shown to be effective for Cryptococcus [534]. Combined micafungin, voriconazole, amphotericin B, and flucytosine therapy has been shown to be effective for peritonitis induced by Candida albicans [535]. Observational studies have shown that rapid removal of the catheter, regardless of the selected anti-fungal agent, was likely to improve outcomes and reduce mortality [530, 532, 533, 536,537,538]. Anti-fungal agents should be continued for a minimum of 2 weeks after catheter removal. Recent studies have reported that about a third of patients were able to return to PD [539].

Tuberculous peritonitis

Basic treatment comprises a combination of isoniazid, rifampicin, ethambutol, and pyrazinamide.

Tuberculous peritonitis should be suspected in all patients who have culture-negative refractory or relapsing peritonitis. As with bacterial peritonitis, the initial findings in almost all cases of tuberculous peritonitis show polymorphonuclear leukocytes in the PD drainage fluid; increased lymphocytes in the drainage fluid typically become evident only in later stages. Ziehl-Neelsen staining of the PD drainage fluid is frequently negative, and positivity rates are low because conventional culture methods are slow. A liquid medium can significantly reduce the time until positive culture results are obtained. As with any diagnosis, culturing the sediment obtained by centrifuging a large volume of drainage fluid (50–100 mL) in both solid and liquid media increases the positivity rate. A separate method involves mycobacterial DNA PCR testing of the drainage fluid, but false-positives are not uncommon [540]. If tuberculous peritonitis is suspected, laparoscopic biopsy of the peritoneum or omentum should be conducted for a rapid diagnosis [541]. Standard treatment for patients with tuberculous peritonitis comprises combined isoniazid, rifampicin, ethambutol, and pyrazinamide therapy. In cases where pyrazinamide cannot be used, the other three drugs are combined for treatment [542, 543]. Protocols in which ofloxacin is added are also standard. Previous reports have indicated that rifampicin concentrations in PD drainage fluid were often low [544]. For this reason, some countries recommend intra-abdominal administration of rifampicin, as in the ISPD guidelines, but this is not standard practice in Japan. Generally, pyrazinamide, ethambutol, and ofloxacin can be suspended after 2 months, whereas rifampicin and isoniazid administration should be continued for 12–18 months [542, 544,545,546,547,548,549,550]. Pyridoxine (50–100 mg/day) should be administered to avoid neurotoxicity due to isoniazid. Meanwhile, long-term administration of pyridoxine at high doses (e.g., 200 mg/day) can itself induce neurotoxicity and should be avoided.
Even when streptomycin is used at a reduced dose, its long-term use can result in auditory nerve toxicity. Ethambutol carries a high risk of optic neuritis in dialysis patients and must therefore be used at the lowest possible dose. Prior reports have indicated that administering 15 mg/kg every 48 h or three times a week for 2 months is ideal [451]. In some cases treatment has been conducted without removing the catheter, but the catheter was removed in over half of cases [420, 544,545,546,547].

(XII) Nontuberculous mycobacterial peritonitis

Nontuberculous mycobacterial peritonitis is treated with multiple antibacterial drugs, including amikacin and clarithromycin.

Mycobacterial peritonitis comprises 0.3–1.3% of all peritonitis cases [420, 500]. Treatment often involves clarithromycin and amikacin [551]. Common causative species include Mycobacterium fortuitum, M. chelonae, and M. abscessus [552, 553]. M. abscessus is common in Asia, and reports have described treatment with amikacin, clarithromycin, meropenem, and cefmetazole [514, 554, 555]. Reported treatments for M. iranicum include levofloxacin, clarithromycin, imipenem, and minocycline [556]. The local application of gentamicin ointment to exit-site infections may render the exit site more susceptible to nontuberculous mycobacterial infection [557]. Treatment regimens for nontuberculous mycobacteria have not been sufficiently established, so individual protocols based on sensitivity test results are needed. Furthermore, the treatment period is not fixed, but reported durations range between 6 and 52 weeks [555, 557]. Catheter removal is typically needed, and there are only limited reports of treatment without removal [552, 554].

(XIII) Catheter removal and re-insertion

The PD catheter should be removed for refractory, relapsing, and fungal peritonitis if there are no clinical contraindications. The fundamental principle is not to preserve the catheter but to consider "how best to protect the peritoneum." If re-insertion of a new catheter is being considered after removal of the previous PD catheter in patients with refractory, relapsing, or fungal peritonitis, there should be an interval of at least 2 weeks from the time of catheter removal to allow full resolution of the peritonitis symptoms.

The indications for catheter removal are summarized in Table 8. The re-insertion of a new PD catheter shortly after catheter removal is not ideal for patients with refractory or fungal peritonitis; such patients should be managed temporarily with HD. Observational studies have indicated that antibacterial drugs should be continued for a minimum of 2 weeks after catheter removal for refractory peritonitis [558, 559]. Approximately 50% of patients were able to return to PD even after the onset of serious peritonitis [558,559,560]. There are virtually no data on the optimal time period between catheter removal and re-insertion of a new catheter, but observational studies have indicated that this should be at least 2–3 weeks [558, 559, 561, 562]. Patients with fungal peritonitis should wait longer before re-insertion [530, 536].

Peritonitis prevention

Catheter implantation

Table 8 Indications for catheter removal

Antibacterial drugs should be administered to prevent peritonitis immediately before catheter implantation.

Antibacterial drugs are administered to prevent peritonitis immediately before catheter implantation [563, 564].
Four RCTs compared groups that received pre-surgery intravenous administration of cefuroxime [565], gentamicin [566, 567], vancomycin [389], or cefazolin [389, 567] with groups that received no treatment. Three of these four RCTs found that pre-surgical administration of antibacterial drugs reduced the incidence of early-stage peritonitis [389, 566, 567]. Meanwhile, one report indicated that cefazolin and gentamicin had no efficacy whatsoever [567]. A single study compared vancomycin and cefazolin head to head [389] and indicated that vancomycin was more efficacious than cefazolin. A systematic review of four trials evaluated the efficacy of preventative intravenous injection of antibacterial drugs in the perioperative period [526]. This review indicated that 1st-generation cephalosporins had slightly inferior efficacy to vancomycin, but cephalosporins are still generally used in order to avoid vancomycin resistance. Each PD program should select its preventative antibacterial drugs after reviewing the antibacterial drug-resistant strains at its own facility. Some data suggest that screening for intranasal Staphylococcus aureus prior to catheter insertion, followed by decolonization (e.g., intranasal administration of mupirocin), is effective in preventing exit-site and tunnel infections; however, no data indicate that this is efficacious in preventing peritonitis [526]. There are a variety of catheter implantation methods besides preventative antibacterial drug administration. Four RCTs compared implantation using a laparoscope with conventional implantation based on ventrotomy [568,569,570,571]. The first trial indicated that the incidence of early-stage peritonitis was significantly lower when laparoscopes were used for implantation [568]; however, these results were not observed in the other three RCTs [569,570,571]. Systematic reviews indicated that the choice between these implantation methods had no significant effect on the incidence of peritonitis [572]. There are two reports on insertion via median and lateral incisions [573, 574]; neither reported significant differences in the incidence of peritonitis. Several studies have examined methods of embedding a catheter subcutaneously for 4–6 weeks [16, 575, 576]. The first prospective trial reported a reduced incidence of peritonitis compared to conventional methods [16]. Of the next two RCTs conducted, one indicated that the embedded method led to a lower incidence of peritonitis [575], whereas the other indicated no significant differences [576]. A retrospective trial of a swan-neck catheter embedded between the presternal area and the abdominal wall indicated no significant difference in incidence [577]. In conclusion, there are no clear data suggesting that prior embedding reduces the incidence of peritonitis.
Discrete Uniform Distribution

If a random variable $X$ can take $N$ different values with equal probability, then we say that it has a discrete uniform distribution. It is also known as the discrete rectangular distribution. A discrete random variable $X$ is said to have a discrete uniform distribution over the range $[1,N]$ if its probability mass function is
$$ \begin{equation*} P(X=x)=\left\{ \begin{array}{ll} \frac{1}{N}, & \hbox{$x=1,2,\cdots, N$;} \\ 0, & \hbox{Otherwise.} \end{array} \right. \end{equation*} $$

Graph of discrete uniform distribution

The graph of the discrete uniform distribution over $1,2,\ldots,6$ (i.e., $N=6$) is as follows:

[Figure: probability mass function of the discrete uniform distribution]

Mean and Variance

The mean of the uniform distribution is
$$ \begin{eqnarray*} E(X) &=& \frac{1}{N}\sum_{x=1}^N x \\ &=& \frac{1}{N}\cdot\frac{N(N+1)}{2} = \frac{(N+1)}{2}. \end{eqnarray*} $$
Let us calculate $E(X^2)$ to find the variance of the discrete uniform distribution.
$$ \begin{eqnarray*} E(X^2) &=& \frac{1}{N}\sum_{x=1}^N x^2 \\ &=& \frac{1}{N}\cdot\frac{N(N+1)(2N+1)}{6} = \frac{(N+1)(2N+1)}{6}. \end{eqnarray*} $$
Hence, the variance of the uniform distribution is
$$ \begin{eqnarray*} V(X) &=& E(X^2) - [E(X)]^2 \\ &=& \frac{(N+1)(2N+1)}{6}-\frac{(N+1)^2}{4}\\ &=& \frac{(N+1)(N-1)}{12}. \end{eqnarray*} $$

M.G.F. of Uniform Distribution

The m.g.f. of the discrete uniform distribution is
$$ \begin{eqnarray*} M_X(t)&=& E(e^{tX})\\ &=& \frac{1}{N} \sum_{x=1}^N e^{tx} \\ &=& \frac{e^t(1-e^{Nt})}{N(1-e^t)}. \end{eqnarray*} $$

General discrete uniform distribution

A general discrete uniform distribution has probability mass function
$$ \begin{aligned} P(X=x)&=\frac{1}{b-a+1},\;\; x=a,a+1,a+2, \cdots, b. \end{aligned} $$

Mean of General discrete uniform distribution

The expected value of the above discrete uniform random variable is $E(X) =\dfrac{a+b}{2}$.

Variance of General discrete uniform distribution

The variance of the above discrete uniform random variable is $V(X) = \dfrac{(b-a+1)^2-1}{12}$.

Distribution Function of General discrete uniform distribution

The distribution function of the general discrete uniform distribution is $F(x) = P(X\leq x)=\frac{x-a+1}{b-a+1}; \; a\leq x\leq b$ (for integer $x$).

In this tutorial, you learned about the theory of the discrete uniform distribution: the probability mass function, mean, variance, and moment generating function. You also learned about the general discrete uniform distribution. To read step-by-step examples and use a calculator for the discrete uniform distribution, refer to the link Discrete Uniform Distribution Calculator with Examples. This tutorial will help you understand how to calculate the mean and variance of the discrete uniform distribution, and how to calculate probabilities and cumulative probabilities with the help of step-by-step examples. To learn more about other discrete probability distributions, please refer to the related tutorials on this site. Let me know in the comments if you have any questions on the discrete uniform distribution and your thoughts on this article.
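As a quick numerical check of the formulas above, SciPy's randint distribution implements exactly this discrete uniform distribution; the snippet below is only an illustrative sketch (note that randint's upper endpoint is exclusive, so the support $\{a, a+1, \ldots, b\}$ is specified as randint(a, b + 1)):

```python
import numpy as np
from scipy.stats import randint

# Discrete uniform distribution on {a, a+1, ..., b}.
a, b = 1, 6
X = randint(a, b + 1)  # upper endpoint is exclusive, so pass b + 1

# P(X = x) = 1 / (b - a + 1) at each support point
support = np.arange(a, b + 1)
print(X.pmf(support))                              # six values of 1/6

# Mean and variance agree with the closed forms derived above
print(X.mean(), (a + b) / 2)                       # 3.5  3.5
print(X.var(), ((b - a + 1) ** 2 - 1) / 12)        # 2.9167  2.9167

# Distribution function F(x) = (x - a + 1) / (b - a + 1) for integer x
print(X.cdf(4), (4 - a + 1) / (b - a + 1))         # 0.6667  0.6667

# A quick Monte Carlo check of the mean and variance
samples = X.rvs(size=100_000, random_state=0)
print(samples.mean(), samples.var())
```

Running the sketch reproduces the mean $(a+b)/2 = 3.5$ and variance $((b-a+1)^2-1)/12 = 35/12$ both exactly and by simulation.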
Printer-friendly CSS, and nonfirstorderisability

27 August, 2007 in admin, expository, math.LO | Tags: CSS, first-order logic, quantifiers

I recently discovered a CSS hack which automatically makes wordpress pages friendlier to print (stripping out the sidebar, header, footer, and comments), and installed it on this blog (in response to an emailed suggestion). There should be no visible changes unless one "print previews" the page.

In order to prevent this post from being totally devoid of mathematical content, I'll mention that I recently came across the phenomenon of nonfirstorderisability in mathematical logic: there are perfectly meaningful and useful statements in mathematics which cannot be phrased within the confines of first order logic (combined with the language of set theory, or any other standard mathematical theory); one must use a more powerful language such as second order logic instead. This phenomenon is very well known among logicians, but I hadn't learned about it until very recently, and had naively assumed that first order logic sufficed for "everyday" usage of mathematics.

Let's begin with some simple examples of statements which can be expressed in first-order logic. If B(x,y) is a binary relation on two objects x, y, then we can express the statement

For every x, there exists a y depending on x such that B(x,y) is true

in first order logic as $\forall x\ \exists y\ B(x,y)$, whereas the statement

For every x, there exists a y independent of x such that B(x,y) is true

can be expressed as $\exists y\ \forall x\ B(x,y)$.

Moving on to a more complicated example, if Q(x,x',y,y') is a quaternary relation on four objects x,x',y,y', then we can express the statement

For every x and x', there exists a y depending only on x and a y' depending on x and x' such that Q(x,x',y,y') is true

as $\forall x\ \exists y\ \forall x'\ \exists y'\ Q(x,x',y,y')$ (note that this allows y' to depend on y also, but this turns out to be moot, because y depends only on x), and one can similarly express

For every x and x', there exists a y depending on x and x' and a y' depending only on x' such that Q(x,x',y,y') is true

as $\forall x'\ \exists y'\ \forall x\ \exists y\ Q(x,x',y,y')$, but it seems that one cannot express

For every x and x', there exists a y depending only on x and a y' depending only on x' such that Q(x,x',y,y') is true (*)

in first order logic! For instance, the statement

To every finitely generated real vector space V one can associate a unique non-negative integer $\dim(V)$ such that
1. V, W are isomorphic if and only if $\dim(V) = \dim(W)$;
2. an injection from V to W exists if and only if $\dim(V) \leq \dim(W)$;
3. a surjection from V to W exists if and only if $\dim(V) \geq \dim(W)$;
4. $\dim(V \oplus W) = \dim(V) + \dim(W)$ for all V, W,

which is part of the fundamental theorem of linear algebra, does not seem to be expressible as stated in first order set theory (though of course the concept of dimension can be explicitly constructed within this language), even if we drop the uniqueness and restrict ourselves to just the assertion that $\dim$ obeys, say, property 1, so that we get an assertion of the form (*). Note that the category of all finite-dimensional vector spaces is not a set (for reasons relating to Russell's paradox) and so we cannot view $\dim$ as a function. More generally, many statements in category theory dealing with large categories seem to not be expressible in first order logic.

I can't quite show that (*) is not expressible in first-order logic, but I can come very close, using non-standard analysis.
The statement

For all real numbers x and x', there exist real numbers st(x) and st(x') depending only on x and x' respectively, such that st(x+x') = st(x)+st(x'), st(xx') = st(x) st(x'), st(1)=1, and st(x) is non-negative whenever x is non-negative, and also such that st(x) is not always equal to x

is true in the non-standard model of the real numbers, but false in the standard model (this is the classic algebra homework problem that the only order-preserving field homomorphism on the reals is the identity). Since the transfer principle ensures that all first-order statements that are true in the standard reals are also true in the non-standard reals, this means that the above statement cannot be expressed in first-order logic. If it weren't for the "st(x) is not always equal to x" part, this would basically be of the form (*).

It seems to me that first order logic is limited by the linear (and thus totally ordered) nature of its sentences; every new variable that is introduced must be allowed to depend on all the previous variables introduced to the left of that variable. This does not fully capture all of the dependency trees of variables which one deals with in mathematics. In analysis, we tend to get around this by using English phrasings such as

… assuming N is chosen sufficiently large depending on $\varepsilon$, and $\delta$ chosen sufficiently small depending on N …

… where C can depend on k and d, but is uniform with respect to n and f …,

or by using the tremendously convenient O() and o() notation of Landau. One then takes for granted that one can eventually unwind all these phrasings to get back to a sentence in formal, first-order logic. As far as analysis is concerned, this is a fairly safe assumption, since one usually deals with objects in very concrete sets such as the real numbers, and one can easily model all of these dependencies using functions from concrete sets to other concrete sets if necessary. (Also, the hierarchy of magnitudes in analysis does often tend to be rather linearly ordered.) But some subtleties may appear when one deals with large categories, such as the category of sets, groups, or vector spaces (though in most applications, one can cap the cardinality of these objects and then one can represent these categories up to equivalence by an actual set). It may be that a more diagrammatic language (perhaps analogous to the commutative diagrams in category theory, or one based on trees or partially ordered sets rather than linearly ordered ones) may be a closer fit to expressing the way one actually thinks about how variables interact with each other. (Second-order logic is, of course, an obvious candidate for such a language, but it may be overqualified for the task. And, in practice, there's nothing wrong with just using plain old mathematical English.)

[Update, Aug 27: bad link fixed.]

Suresh Venkat

It's worth mentioning here that first and second order logic are closely related to complexity classes, via an area of complexity theory called descriptive complexity. A famous result proved by Ron Fagin showed that NP is precisely the class of languages expressible by predicates in existential second order logic, a result that is very intriguing in how it connects "syntax" and "algorithmic complexity". Other results include the characterization of P by first order logic + a fixed point operator + an ordering operator, and results for lower complexity classes, many of these results shown by Neil Immerman.
http://en.wikipedia.org/wiki/Descriptive_complexity Oleg Izhvanov A technical note: the last link in the post is broken. "And, in practice, there's nothing wrong with just using plain old mathematical English". Dear Suresh and Oleg, Thanks for the references and corrections! Ori Gurel-Gurevich "Note that the category of all finite-dimensional vector spaces is not a set (for reasons relating to Russell's paradox) and so we cannot view \dim as a function. More generally, many statements in category theory dealing with large categories seem to not be expressible in first order logic." That is true if we restrict ourselves to ZFC. However, an extended set theory like NBG is more suitable for such things. This would allow one to talk about such class functions as without resorting to second order theories. There's also the additional benefit of NBG being finitely axiomatizable. Andy D I believe I can show that the statement (*) Terry was discussing is not FO-definable for a particular FO-definable quaternary relation Q(x, x', y, y'). Namely, let Q(x, x', y, y') be (y != x) and [(x = x') implies (y = y')] and [ (x != x') implies (y != y')]. Then (somebody please check this) statement (*) is true iff there exists a perfect matching over the universe's elements. Any infinite universe has such a matching; so the negation of (*) is true iff the universe is finite and has an odd number of elements (and the class of FO-definable properties is closed under negation). But universe parity is known not to be FO-definable. FO-inexpressibility results are to my knowledge generally shown by one of two approaches: i) analysis of the associated 'Ehrenfeucht-Fraisse-Game', an approach which can show inexpressibility of parity; ii) reduction to a known FO-inexpressible property, as above. The most common reduction targets seem to be parity/counting properties, and graph connectivity. I'd heard of branching quantifiers before, but didn't know what they were good for (except that some linguists, like Hintikka, have suggested that they might be good for analyzing natural language). It's good to have some examples now! David Corfield As Kenny's link mentions, branching quantifiers have been treated by Hintikka in his 'independence-friendly' logic. The formula with four variable mentioned corresponds to a 2 vs 2 player game with incomplete information. Samson Abramsky, the theoretical computer scientist, has done some interesting work looking into the syntax and semantics of logics corresponding to multi-player games, with one ulterior aim being the modelling of concurrency. OK, looks like what I wrote is subsumed by the results mentioned on Q_H in Kenny's reference. (*) seems like a surprisingly powerful schema. Kevin O'Bryant Here's another example dear to your heart. For each positive integer k, the statement, "the set A contains an arithmetic progression of length k" is transparently FO, while the statement "for each positive integer k, the set A contains an arithmetic progression of length k" is not (at least not in any straightforward manner). A famously non-FO statement is "Every set with an upper bound has a least upper bound", which is true for the reals, but not true for the non-standard reals, and so cannot be FO. This is, it seems to me, part of the undertapped mojo of nonstandard analysis (beyond epsilon management). You get access to different second order theorems (you lose a few but also gain a few) to help you prove first order theorems. 
Dear Kevin, I'm not sure I understand your examples, at least if I am interpreting "set" in an internal sense. Wouldn't be a first-order sentence that asserts that A contains arbitrarily long progressions? Similarly, the statement that every set with an upper bound has a least upper bound is still true for the non-standard reals if one restricts attention to non-standard sets of reals (as opposed to sets of non-standard reals). If one interprets the concept of "set" internally then seems to be a first-order sentence (albeit a rather ugly one) that asserts that every (non-empty) set with an upper bound has a least upper bound. I think in your first example you mean rather than . As for the second example, it's always sort of sneaky to quantify over sets in a first-order way – when you do that, you get non-standard models where the objects are just fine, but you're missing some of the sets or something. I know people working on reverse mathematics (like Harvey Friedman and Steven Simpson at various points) do that with models of "second-order" arithmetic (really just first-order arithmetic where they use first-order quantification over sets as well) to prove that certain standard theorems are independent of weakened axiom systems. Scott McKuen Terry's first example (mod Kenny) looks good if we're willing to accept infinitely-long arithmetric progressions in the nonstandard models (since the variable k then runs over nonstandard positive integers as well). If we just want arbitrarily-long-but-finite progressions, which I think is what Kevin intended, then it fails. If we're working outside the theory of arithmetic, say in fields, and we don't have a predicate to designate the integers as a definable set, then it doesn't work. Emmanuel Kowalski Restricting the "domain" of quantification (I'm not sure which is the technically correct word in logic/model theory) may make a huge difference in what sets first order theory can define. One very cute example, which does not seem to be so well-known, considers definable sets in the first order theory of rings (+, -, *, 0, 1, …) applied to finite fields: if we have a formula with , it has been shown by Chatzidakis, van den Dries and Macintyre (J. Reine angew. Math. 427 (1992), 107–135) that the number of "solutions" in a finite field (i.e., the number of in for which the formula is "true", with the standard interpretation of the language, and all quantifiers implicitly running over only) is always (either 0 or) of the type where is a rational number (which may depend on , but can only take finitely many values for a given ), and is an integer with the same property. This is of course reminiscent (and ultimately derived from) the Lang-Weil estimate for the number of points on an algebraic variety over a finite field. So in particular the subfield of is not definable in the first order theory of rings, i.e., there is no formula , for which the set of "solutions" in is for infinitely many . (Come to think of it, I'm not sure how using second order theory of rings could help here; of course, adding something like the Frobenius automorphism in the language would solve the problem). 
(Note: one can even add exponential sums into the mixture and show cancellation statements in exponential sums over such definable sets, similar to the results of Weil and Deligne on exponential sums over finite fields — at least the simplest statements extend –; in particular, it follows that an "interval" (or an arithmetic progression) in is also not definable in this language (unless it's a fixed number of elements, or the complement of such a thing). This is not surprising, but for intervals with positive density one needs more than the result of C-vdD-M — namely some additive characters do not cancel appreciably over a long interval/arithmetic progression). I'm confused about how the 2×2 branching quantifier is supposed to support the "there are infinitely many" quantifier. Here' s the quantifier-free clause in question for a generic formula (phi): The three parts appear to say 1) something satisfies the formula; 2) if you have one satisfier, you can get another that isn't just the starting point (no loops that start at "a"); 3) the map from one satisfier to the next is a bijection (no loops can start later on, either). Why does quantifying this way: force "there are infinitely many" but quantifying, say, like this: does not? Can we exhibit a finite model of the first-order version? I'm assuming that your final display was of a standard first-order sentence rather than a branching quantifier one, despite the line break. In that case, the problem with that sentence is that is allowed to depend on as well as ; once you allow that, the clause is very easy to satisfy even for a model in which is only satisfied at two values, a and b; one just sets to always equal b, and to equal when , and a otherwise. It seems that the branching quantifier offers a way to define the concept of a class function in a manner which can't be replicated in first order logic, because one needs to assert that equal inputs force equal outputs, and you can't do this if one of the outputs is allowed to depend on the other input. Hi, Terry, That was my anonymous bad LaTeX above – yes, no line break intended. Thanks for the example – now it's clear why the bijection isn't enforced. Rewriting the thing with Skolem functions helped, too. While I find non-firstorderability very interesting, I think it's the norm rather than the exception, which is why we use set theory rather than first-order models for everything. For example, first-order logic can't express the Archimedean axiom, or the Riemann hypothesis. Or am I misunderstanding? Dear Walt, If you combine first-order logic with a standard set theory such as ZFC, then I agree that you can describe, say, Riemann hypothesis (though I won't try to write down the actual sentences here, they would be atrociously long), because one can use set theory to model objects such as the zeta function. (The Archimedean axiom seems to me to be expressible in the first-order language of the integers and reals without any need for set theory.) What was surprising, to me at least, was that even with ZFC, statements such as the existence of an integer dimension for each finitely generated vector space do not seem to be easily firstorderisable; set theory doesn't directly help here because it cannot describe class functions such as unless one performs some trickery to reduce the size of the category of vector spaces to the point where it can be modeled by a set rather than a class. 
As Ori pointed out though, if one uses von Neumann-Bernays-Godel theory instead of ZFC, one can deal with classes and class functions more easily and many of these issues seem to disappear. Since NBG and ZFC are equivalent when restricted to sentences that refer only to sets, the nonfirstorderisability issue is mostly moot, but it still is more subtle than one might naively first imagine. I see what you're getting at. I think the ZFC-ists answer would be to translate it into a collection of statements, parametrized by ordinals, so that for each ordinal a you can state and prove that dim exists for the set V_a in the von Neumann heirarchy. (That's probably what you mean by "some trickery".) Actually, I think you can basically formalize this in ZFC. You can't formalize your statement exactly, of course, since it involves branching quantifiers, but you can formalize a statement in ZFC that implies yours. I don't know much about branching quantifiers, though, so I could be getting this totally wrong. Let P(V,n) be the propositional formula that specifies that V is a finite-dimensional vector space, and n is its unique dimension. Then you should be able to prove a statement like: (all V)(exists n)P(V,n) and ( (all V)(exists n)P(V,n) implies Q(V,V',n,n') ), where Q is one of your theorems about vector spaces. I think you can show this will imply your branching quantifier statement by Skolemization. P(V,n) turns into P(V, n(V)), Q(V,V',n,n') would turn into Q(V, V', n(V, V'), n'(V, V')), but you can prove by the uniqueness of dimension that n(V,V') = n(V) and n'(V, V') = n'(V'). (Though as I said, I don't know much about branching quantifiers, so this may be more of a parade of my ignorance than anything else.) Regarding the first-orderizability of containing arbitrarily long arithmetic progressions… Terry's sentence only captures what we mean by arithmetic progressions in some settings. The first quantifier is inherently referring to the natural numbers, while the other quantifiers are referring to elements of the model. If the only models we're interested in contain the ring of integers (so that makes the usual sense), then it's fine. The problem with this arises because we talk about arithmetic progressions in general abelian groups. In a group the statement does not make sense. For , for example, we can replace with , and so for each particular we can make it make sense. But to make it make sense for all , we need to quantify over the natural numbers. By definition "first order" means that we quantify only over elements of the model. For example, we all concur precisely on the meaning of "arithmetic progression" in the group of rational points on an elliptic curve. The set contains terms in arithmetic progression (for a natural number k) if . For any particular natural number , those dots represent finite strings. But without the natural numbers intrinisically available to us, we cannot quantify over . Do we ever talk about axiom systems that are modelled by elliptic curves?! I thought an elliptic curve was just another set sitting in ZFC or NBG or NF or whatever. Therefore, normally when we speak of elliptic curves, they are inside our set theory, and the integers are available to us. Working with axioms systems like "first order theory of [something]s", where [something] could be groups, rings, elliptic curves, etc, is definitely something important. 
I'm not sure about elliptic curves, but the first-order theory of rings (and fields, which can be defined in this language) has many applications in algebra and number theory. The fact that some sets can be defined in such a way can have important consequences for applications. This can be thought-of (maybe) as an analogue of "smoothness" for functions: not all functions are smooth, but if you know that one is, it can be very useful! E.g., the unit group of a ring can be defined by , but not the torsion subgroup of the unit group, because there is no a priori upper bound on the order of a torsion unit — note of course if we can invoke integers, the torsion subgroup is easy to define… As an example of application, using the results of Chatzidakis, van den Dries and Macintyre I mentioned in my previous comment on definable sets in finite fields (and other logical things I don't understand, including ultrafilters), Hrushovsky and Pillay managed to prove the following theorem (Crelle 462, 1995, p. 69–91, Prop. 7.3): "Let be any Zariski-dense finitely generated subgroup of a (almost simple, simply-connected) algebraic group defined over , for instance , or or ; then for almost all primes , the reduction map is surjective." Notice the statement is purely algebraic and very natural (and beautiful). In fact it was proved earlier by Matthews, Vaserstein and Weisfeiler, who used the classification of finite simple groups! Even for , this is not so obvious to prove by hand, and for a Zariski-dense subgroup, which can be a very thin subset, this is really deep. Logic games « Forking, Forcing and Back-and-Forthing […] a recent post, Terence Tao evokes statements that cannot be formalized in first-order logic. David Corfield has […] Your statement concerning dimension is a first order one as long as you specify what your predicate P(V,n) is – i.e. you give an explicit definition of the concept of dimension. If you don't actually want to write down this definition, and instead want to assert the abstract existence of a concept of dimension which satisfies various properties, then you need to pass to second-order logic (as you will need a second-order quantifier such as ). Thanks for the clarification. I guess the moral is that statements involving integers can require the integers to be part of the language in order to express them in first-order logic; similarly statements involving sets or functions may require set theory to be firstorderisable; and statements involving class functions may require a language that can express the concept of a class, such as NBG. The definition of firstorderizability, as far as I understand it, is that a sentence in a theory is firstorderizable if its truth value (as you range over models of the theory) is the same as a first-order sentence. While your sentence requires second-order logic to express, its truth value over models of ZFC is the same as the sentence using the explicit definition. In that sense, your sentence is firstorderizable. Technically you are correct, but by this definition any statement that one can actually prove in ZFC becomes firstorderizable for trivial reasons, because it has the same truth value as a tautology such as 0=0. But one can easily tweak the sentence to avoid this, by replacing the question "do vector spaces have a unique notion of dimension?" to "does a [insert vector-space like object here] have a unique notion of dimension?", where we view the concept of a vector-space like object as a primitive. 
For instance, one could imagine that one has a class of vector-space-like objects which form an abelian category with a distinguished "one-dimensional" object (the analogue of $\mathbb{R}$ in the actual vector space example), and one can ask whether this class has a notion of dimension which obeys all the properties listed above (replacing surjections by epimorphisms, replacing $\mathbb{R}$ by the distinguished one-dimensional object, and so forth). If we model this class of objects by a unary predicate (e.g. Vec(V) is the statement that V is a vector-space-like object), then the question "Does the class Vec have a unique notion of dimension?" becomes a statement which I believe is not firstorderisable in ZFC, though I might be wrong on this if there is a sufficiently canonical way to define dimension which works for all abelian categories that admit such a notion.

I'm only trying to be technically correct. :-)

Timothy Chow

It may be worth mentioning that the inability of ZFC to talk about proper classes directly is not as serious a handicap as it might seem at first glance. The standard dodge, as explained for example in Kunen's book on set theory, is to use formulas to represent (definable) classes. Thus we can't talk about the class of all vector spaces directly, but we can write down a formula that expresses "x is a vector space" and for almost all purposes this formula serves as a perfectly fine surrogate for the class of all vector spaces. For example, the dimension "function" is most intuitively thought of as a map from the class of all vector spaces to the natural numbers, or in other words as a class of ordered pairs (V,d) where V is a vector space and d is a natural number. Though this is a proper class and can't be "talked about directly" in the first-order language of set theory, we can write down a formula that expresses "d is the cardinality of a basis for V" and use this formula whenever we want to say something about the dimension "function."

If using formulas as surrogates for classes gets too cumbersome, then another trick is to use a set $V_\kappa$, where $\kappa$ is a strongly inaccessible cardinal, as a surrogate for the class of all sets. $V_\kappa$ is a model for ZFC, so for any fact about the set-theoretic universe that you might want to prove using the ZFC axioms, there is a corresponding fact about $V_\kappa$. The point is that $V_\kappa$ is something you can talk about directly since it's a set. So by pretending that $V_\kappa$ is the class of all sets, you can take all the things you want to say about classes and formalize them as statements about sets in $V_\kappa$. This can all be done in ZFC + "there exists a strongly inaccessible cardinal" (or maybe a slightly stronger large cardinal axiom if you are doing some really fancy reasoning with classes). Strictly speaking you're not really talking about proper classes but about all sets in $V_\kappa$, but since $V_\kappa$ is a model of ZFC, this is typically good enough. These kinds of tricks demonstrate why a system like Mizar, which is based on ZFC + a large cardinal axiom, is expected to be powerful enough to formalize any mathematical proof we might care to encode in it. If Mizar couldn't even formalize the dimension of a vector space then clearly it wouldn't be sufficient to formalize all of mathematics!

4 February, 2011 at 2:15 pm

Vadim Tropashko

Why isn't the [formal] conjunction of $\forall x\, \exists y\, \forall x'\, \exists y'\ Q(x,x',y,y')$ and $\forall x'\, \exists y'\, \forall x\, \exists y\ Q(x,x',y,y')$ equivalent to (*)?

Because the y and y' in the first sentence need not match up with the y and y' in the second sentence. An explicit counterexample is given by the predicate Q(x,x',y,y') = "$y = y' \in \{x,x'\}$".
For this choice of predicate, the first and second of your sentences are true, but the third sentence is false.

All three statements are propositions which are closed formulas involving the quaternary predicate Q(_,_,_,_), so my intuition fails to see why the variables should match. In your counterexample, can you please add parentheses: is it a set-membership relation equal to y, or the binary identity relation y=y' being a member of a set with two elements? (In the second interpretation it is symmetric with respect to y and y'.) Perhaps you meant "$y = y' \wedge y' \in \{x,x'\}$"? Still, this predicate is symmetric with regard to y and y'.

5 February, 2011 at 12:30 am

Yes, in my counterexample, Q(x,x',y,y') is the assertion that $y = y'$ and that $y' \in \{x,x'\}$, and is thus symmetric in y and y'. Equivalently, Q(x,x',y,y') is the assertion that either $y = y' = x$ or $y = y' = x'$.

Perhaps it will be easier to understand the distinction between the three sentences by adopting a game-theoretic approach. Consider the following (cooperative) game involving two players. The first player is of age x, and the second player is of age x'. In the game, the first player writes down a number y, and the second player writes down a number y', and then both numbers are revealed. The players win the game if (a) the two numbers are the same; AND (b) this number is either equal to the age x of the first player, or the age x' of the second player. Initially, both players know their own age, but not the age of the other player. The players are allowed to discuss strategy beforehand, but there are three variants of the game, depending on what information can be shared:

Variant 1: the second player may learn the age x of the first player, but the first player may not learn the age x' of the second player (i.e. the first player can only use x).

Variant 2: the first player may learn the age x' of the second player, but the second player may not learn the age x of the first player (i.e. the second player can only use x').

Variant 3: neither player may learn the age of the other (i.e. the first player can only use x, and the second player can only use x').

It is easy to always win Variant 1: both players write down the number x (which they both know). However, it is not possible in general to win Variant 3, because they do not both know x and they do not both know x', and so cannot find a strategy that obeys both (a) and (b).

OK, let's [grudgingly] admit the branching quantifier exists :-) Is the expression all x exists y all xx exists yy R(x,xx,y,yy) all xx exists yy all x exists y R(x,xx,y,yy) stronger or weaker than Q_H(x,xx,y,yy) R(x,xx,y,yy)? Next, your counterexample actually satisfies the stronger assertions all x exists y exists yy all xx Q(x,xx,y,yy) all xx exists y exists yy all x Q(x,xx,y,yy) Is this expressible via FOL + Henkin's quantifier?

Allen Mann

Dear Professor Tao, In your second reply to Vadim (on 5 February 2011) you explain the semantics of a branching quantifier sentence by interpreting the individual quantifiers ($\forall$, $\exists$) as moves in a two-player game. This is the essential insight that led Hintikka and Sandu to develop game-theoretic semantics. The semantic game for a first-order sentence is a contest between two players. The existential player (Eloise) tries to verify the sentence by choosing the values of existentially quantified variables, while the universal player (Abelard) tries to falsify it by picking the values of universally quantified variables.
Disjunctions prompt the verifier to choose which disjunct she wishes to verify; conjunctions prompt the falsifier to pick which conjunct he wishes to falsify. Negation tells the players to switch roles. (To verify , one must falsify .) Notice that, in your example, first two variants of the game are essentially games with perfect information, while the third variant is a game with imperfect information. First-order logic with imperfect information is an extension of first-order logic that explicitly allows semantic games with imperfect information. There many syntactic devices one can use to indicate what information is available to the players. In independence-friendly (IF) logic, one adds a "slash set" to each quantifier that contains all of the previously quantified variables from which the quantifier is independent. For example, the fact that the universe has infinitely many elements can be expressed by the IF sentence the Skolemization of which is latex c$ is a constant. The Skolemization is satisfiable if and only if the universe admits a non-surjective injection (i.e., the universe is Dedekind infinite). Gabriel Sandu, Merlijn Sevenster, and I have recently published a book, Independence-Friendly Logic, that emphasizes the game-theoretic approach to logic, including first-order logic with imperfect information. Dependence logic is another approach to logic with imperfect information. Instead of "slashing" each quantifier, dependencies between variables are indicated using new atomic formulas called dependence atoms. For example, the dependence atom asserts that the value of the variable depends on (only) the values of $\latex x_{1}, \ldots, x_{n}$. The sentence can be expressed in dependence logic as CORRECTION. The variable in the sentence above should be existentially quantified: ivvrrzc I guess, the original (*) sentence can be 1st order deciphered as follows: \forall x \exists y \forall x_0 \exists y_0 \forall x' \exists y' [O(x,x',y,y') \and Q(x_0,x',y_0,y')]. Of course, I agree with the aouthor that "plain old mathematical English," i.e. expanding the language by the exactly intended two functions, is the best. Unfortunately, this sentence is not equivalent to (*). If, for instance, all variables are quantified over a set with at least three elements, and is the assertion that , then (*) is false, but the statement you write is true. I think, "\forall x \exists y \forall x_0 \exists y_0 \forall x' \exists y' [O(x,x',y,y') is true and is equivalent to Q(x_0,x',y_0,y')]" gives the correct semantics of (*). Again, the counterexample I mentioned previously (in which is the assertion that ) demonstrates that this is not the case. It is clear that (*) is not true for this choice of Q (since y', which depends only on x', cannot avoid every value of x), but your statement is true. (For each x, choose y=x; for each x_0, choose y_0=x_0; and for each x', choose y' to be an element that is not equal to either x or x_0. Then Q(x,x',y,y') and Q(x_0,x',y_0,y') are both true and thus also equivalent to each other.) Dear Professor Tao, first of all, thank you for your patience and teaching me more sophistication. Now, is writing "\forall x \exists y \forall x' \exists y' \forall x_0 [O(x,x',y,y') \and (O(x,x',y,y') \implies Q(x_0,x',y,y'))] " satisfactory? Sorry, or rather: "\forall x \exists y \forall x' \exists y' \forall x_0 \exists y_0 [O(x,x',y,y') \and (O(x,x',y,y') \implies Q(x_0,x',y_0,y'))]. 
" Sean Eberhard Just a little note: you parenthetically mentioned that "the only order-preserving field automorphism of the reals is the identity". In fact you don't need to assume order-preservation: it's a consequence! 3 August, 2012 at 11:42 am Nieludzka logika… « FIKSACJE […] Nonfirstorderizability – zjawisko polegające na tym, że pewne wyrażenia logiczne nie dają się zapisać jako zdania w logice pierwszego rzędu. Nic w tym dziwnego, takich stwierdzeń jest bardzo dużo, niektóre całkiem "proste" jak np. "przestrzeń wektorowa skończonego wymiaru", lub "zbiór o skończone liczbie elementów". Okazuje się, że istnieje jednak podejście nazywane Branching które ogólnie polega na porzuceniu liniowego następstwa kwantyfikatorów ogólnych ( "dla każdego" …) i wprowadzeniu dodatkowej konwencji że mogą być one używane "jednocześnie", lub "atomowo w grupie". Po wprowadzeniu takiej modyfikacji uzyskujemy logikę która nadal jest słabsza niz logika drugiego rzędu ( a więc nie ma kwantyfikacji po zdaniach) jednak jest mocniejsza niż logika pierwszego rzędu. W szczególności pozwala na wyrażanie stwierdzeń o prawdziwej niezależności dwu i więcej zmiennych ( proszę porównać: https://terrytao.wordpress.com/2007/08/27/printer-friendly-css-and-nonfirstorderizability/) […] Dear Terry! I am certainly a latecomer to this discussion, but thank you for sharing the idea of nonfirstorderisability, it's very interesting! :-) Just a little question: You write: (note that this allows y' to depend on y also, but this turns out to be moot, because y depends only on x)" I don't understand your last comment within the parantheses. Is it even possible for a variable to depend on a variable that is bound by an *existential* quantifier? I always thought that existential variables are the only type of variable that can depend on another variable and that these existential variables can only depend on universal variables. ("Existential variable" means "variable bound by an existential quantfier", similarly "universal variable") One can Skolemize a first-order logic sentence either by replacing the existential variables with functions of all preceding universal variables, or by replacing them with all preceding variables including the existential ones, but the latter replacement adds no additional generality since, as noted in the text, the existential variables included would in turn be dependent on prior universal variables. 1. I'm not familiar with the terminology "a variable depends on another variable". Where can one read the definition of this concept? (Since you said that one uses them in Analysis: can one find them in an introduction to analysis?) 2. Also, I'm a little confused reading the thread http://mathoverflow.net/questions/118254/usage-of-set-theory-in-undergraduate-studies/118261 where Andrej Bauer points out: "We should never say that one variable depends on another." Whom should I believe? 3. When you for example say "For every x and x', there exists a y depending only on x and a y' depending only on x' such that Q(x,x',y,y') is true", do you mean the second-order sentence that says "There is are functions f and f' such that for any x and x', Q(x, x', f(x), f(x'))" 4. How can one prove that if a variable y depends only on a variable x, and the variable z only depends on x, that then z depends only on x? In the strict sense, Andrej is correct: variables in a first-order sentence are not functions of other variables. 
However, when one converts such a first-order sentence to an equivalent (or more precisely, equisatisfiable) Skolem normal form, the existential variables become functions of preceding variables (and depending on how one carries out this form, the function either only involves preceding universal variables, or can be a function of both the preceding universal and existential variables, with the latter in turn being functions of further variables). What about questions 3 and 4? Dear Terry, I think that the problem with first-order logic you describe is really Russell's paradox in disguise. That the statement about dimensions and vector spaces is not formalizable in FOL + ZFC is due to the fact that in ZFC, one cannot speak about proper classes (and thus also not about proper class functions). But also a class theory like Morse-Kelley can't express every mathematically meaningful statement, like statements about metaclasses (that is, arbitrary collections of classes). As Gödel pointed out in his famous paper "Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme 1": "The true source of the incompleteness attaching to all formal systems of mathematics, is to be found — as will be shown in Part II of this essay — in the fact that the formation of ever higher types can be continued into the transfinite (cf. D. Hilbert 'Über das Unendliche', Math. Ann. 95, p. 184), whereas in every formal system at most denumerably many types occur. It can be shown, that is, that the undecidable propositions here presented always become decidable by the adjunction of suitable higher types (e.g. of type ω for the system P). A similar result also holds for the axiom system of set theory." So one can always add another stage to the ontology of a formal system. Also, a logic that can express dependence relations between variables has been developed by Hintikka. See https://en.wikipedia.org/wiki/Independence-friendly_logic "Hintikka's proposal that IF logic and its extended version be used as foundations of mathematics has been met with skepticism by other mathematicians, including Väänänen and Solomon Feferman." You say "I can't quite show that (*) is not expressible in first-order logic". And (*) is this statement: How can one formally define/express things like "(*) is not expressible in first-order logic"? 2 January, 2021 at 9:48 am This phenomenon is very well known among logicians, but I hadn't learned about it until very recently, and had naively assumed that first order logic sufficed for "everyday" usage of mathematics. Can one now say that second-order logic is sufficient for "everyday" usage of mathematics, particularly analysis? Are there examples in analysis that one actually needs the power of a higher order logic (than second order)? The order of logic used in a formal argument is not actually all that good of a proxy for the "power" of that argument. Most instances of second-order (or higher-order) logic involving basic mathematical objects such as numbers, for instance, can be replicated quite faithfully as first-order logic in a set theory such as ZFC, which can encode not only numbers but sets of numbers, functions from numbers to other numbers, and so forth. 
And then there are arguments that go outside of a single formal logical system (first-order, second-order, or whatever) and invoke metamathematical arguments; for instance in nonstandard analysis one often appeals to the metamathematical transfer principle (Los's theorem) to convert a first-order theorem in standard analysis to its nonstandard counterpart. Again, one can usually formalise this sort of maneuver within first-order ZFC if desired. So I would say that for analysis first-order ZFC is usually more than sufficient as a formal structure to model one's arguments, although one is not necessarily confined to this structure if one prefers to work in some other formal structure (or to reason less formally). Outside of analysis, one sometimes has to manipulate very large categories, in which case it is sometimes convenient to work with Grothendieck universes or similar objects which can't quite be constructed purely within first-order ZFC. My understanding though is that in practice these universes are more convenient technicalities than essential components to the arguments.
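The counterexample predicate from the comment thread above, Q(x,x',y,y') = "$y = y' \in \{x,x'\}$", is small enough to check by brute force on a finite domain. The following Python snippet is only an illustrative sketch (it is not taken from the post or its comments): it verifies that, on a three-element domain, both linearly quantified first-order sentences hold, while the branching statement (*) — read via Skolem functions f and f' with Q(x, x', f(x), f'(x')) required for all x, x' — fails.

```python
from itertools import product

D = range(3)  # a three-element domain

def Q(x, xp, y, yp):
    # The counterexample predicate: y = y' and y' is one of {x, x'}
    return y == yp and yp in (x, xp)

# Linear first-order sentence: for all x exists y for all x' exists y' Q
sentence_1 = all(any(all(any(Q(x, xp, y, yp) for yp in D) for xp in D) for y in D) for x in D)

# Linear first-order sentence with the quantifier blocks swapped:
# for all x' exists y' for all x exists y Q
sentence_2 = all(any(all(any(Q(x, xp, y, yp) for y in D) for x in D) for yp in D) for xp in D)

# Branching statement (*): there exist functions f, f' (y = f(x), y' = f'(x'))
# such that Q(x, x', f(x), f'(x')) holds for every pair (x, x').
def branching():
    for f in product(D, repeat=len(D)):        # all functions f: D -> D
        for fp in product(D, repeat=len(D)):   # all functions f': D -> D
            if all(Q(x, xp, f[x], fp[xp]) for x in D for xp in D):
                return True
    return False

print(sentence_1, sentence_2, branching())  # True True False
```

Changing D to a single-element domain makes all three sentences true, which is consistent with the game-theoretic explanation: the failure of Variant 3 needs at least two distinct possible ages.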
August 2018, 1(3): 201-253. doi: 10.3934/mfc.2018010

Influence analysis: A survey of the state-of-the-art

Meng Han 1, and Yingshu Li 2,

Kennesaw State University, 1100 South Marietta Pkwy, Marietta, GA, 30060, USA
Georgia State University, 25 Park place, Atlanta, GA, 30303, USA

* Corresponding author: Meng Han

Received December 2017 Revised February 2018 Published July 2018

Online social networks have recently seen exponential growth in the number of users and activities. The rapid proliferation of online social networks provides rich data and infinite possibilities for us to analyze and understand the complex inherent mechanisms that govern the evolution of the new online world. This paper summarizes the state-of-the-art research results on social influence analysis in a broad sense. First, we review the development of influence analysis in social networks based on several basic concepts and features from a social perspective, and then discuss online social networks. After describing the classical models which simulate the influence-spreading process, we give a bird's-eye view of the up-to-date literature on influence diffusion models and influence maximization approaches. Third, we present applications, including web services, marketing, and advertisement services, which are based on influence analysis. Finally, we point out the research challenges and opportunities in this area for the reference of both industry and academia.

Keywords: Influence analysis, graph theory, network topology.

Mathematics Subject Classification: Primary: 68R10, 68W01; Secondary: 68W25.

Citation: Meng Han, Yingshu Li. Influence analysis: A survey of the state-of-the-art. Mathematical Foundations of Computing, 2018, 1 (3) : 201-253. doi: 10.3934/mfc.2018010
Asch, Opinions and social pressure, Readings about the social animal, 193 (1955), 17-26. doi: 10.1038/scientificamerican1155-31. Google Scholar C. C. I. Aslay, W. Lu, F. Bonchi, A. Goyal and L. V. S. Lakshmanan, Viral marketing meets social advertising: Ad allocation with minimum regret, Proceedings of the VLDB Endowment VLDB Endowment Hompage Archive, 8 (2015), 814-825. doi: 10.14778/2752939.2752950. Google Scholar D. B. Bahr, R. C. Browning, H. R. Wyatt and J. O. Hill, Exploiting social networks to mitigate the obesity epidemic, Obesity (Silver Spring), 17 (2009), 723-728. doi: 10.1038/oby.2008.615. Google Scholar E. Bakshy, D. Eckles, R. Yan and I. Rosenn, Social influence in social advertising: Evidence from field experiments, in Proceedings of the 13th ACM Conference on Electronic Commerce, ACM, Valencia, Spain, 2012,146–161. doi: 10.1145/2229012.2229027. Google Scholar E. Bakshy, J. M. Hofman, W. A. Mason and D. J. Watts, Everyone's an influencer: quantifying influence on twitter, in Proceedings of the fourth ACM international conference on Web search and data mining, ACM, Hong Kong, China, 2011, 65–74. doi: 10.1145/1935826.1935845. Google Scholar E. Bakshy, I. Rosenn, C. Marlow and L. Adamic, The role of social networks in information diffusion, in Proceedings of the 21st International Conference on World Wide Web, ACM, Lyon, France, 2012,519–528. doi: 10.1145/2187836.2187907. Google Scholar N. Barbieri and F. Bonchi, Influence maximization with viral product design, Proceedings of the 2014 SIAM International Conference on Data Mining, 2014, p9. doi: 10.1137/1.9781611973440.7. Google Scholar N. Barbieri, F. Bonchi and G. Manco, Topic-aware social influence propagation models, in Proceedings of the 2012 IEEE 12th International Conference on Data Mining, IEEE Computer Society, 2012, 81–90. Google Scholar S. Bhagat, A. Goyal and L. V. S. Lakshmanan, Maximizing product adoption in social networks, in Proceedings of the Fifth ACM International Conference on Web Search and Data Mining, ACM, Seattle, Washington, USA, 2012,603–612. doi: 10.1145/2124295.2124368. Google Scholar S. Bharathi, D. Kempe and M. Salek, Competitive influence maximization in social networks, in Internet and Network Economics, Springer, 2007,306–311. doi: 10.1007/978-3-540-77105-0_31. Google Scholar K. Bhawalkar, S. Gollapudi and K. Munagala, Coevolutionary opinion formation games, STOC'13Proceedings of the 2013 ACM Symposium on Theory of Computing, 41–50, ACM, New York, 2013. doi: 10.1145/2488608.2488615. Google Scholar F. Bonchi, Influence propagation in social networks: A data mining perspective, IEEE Intelligent Informatics Bulletin, 12 (2011), 8-16. doi: 10.1109/WI-IAT.2011.292. Google Scholar C. Borgs, M. Brautbar, J. Chayes and B. Lucier, Maximizing social influence in nearly optimal time, Proceedings of the Twenty-Fifth Annual ACM-SIAM Symposium on Discrete Algorithms, 946–957, ACM, New York, 2014. doi: 10.1137/1.9781611973402.70. Google Scholar A. Borodin, Y. Filmus and J. Oren, Threshold models for competitive influence in social networks, in Proceedings of the 6th international conference on Internet and network economics, Springer-Verlag, Stanford, CA, USA, 2010,539–550. doi: 10.1007/978-3-642-17572-5_48. Google Scholar S. Bourigault, C. Lagnier, S. Lamprier, L. Denoyer and P. Gallinari, Learning social network embeddings for predicting information diffusion, WSDM '14 Proceedings of the 7th ACM International Conference on Web Search and Data Mining, (2014), 393-402. doi: 10.1145/2556195.2556216. 
Google Scholar C. Budak and R. Agrawal, On participation in group chats on twitter, 2013,165–176. Google Scholar J. T. Cacioppo, J. H. Fowler and N. A. Christakis, Alone in the crowd: the structure and spread of loneliness in a large social network., Journal of Personality and Social Psychology, 97 (2009), 977. Google Scholar J. L. Z. Cai, M. Yan and Y. Li, Using crowdsourced data in location-based social networks to explore influence maximization, in Computer Communications, IEEE INFOCOM 2016-The 35th Annual IEEE International Conference on, IEEE, 2016, 1–9. doi: 10.1109/INFOCOM.2016.7524471. Google Scholar Z. Cai, Z. He, X. Guan and Y. Li, Collective data-sanitization for preventing sensitive information inference attacks in social networks, IEEE Transactions on Dependable and Secure Computing, (2016), p1. doi: 10.1109/TDSC.2016.2613521. Google Scholar J. Cannarella and J. A. Spechler, Epidemiological modeling of online social network dynamics, arXiv preprint, arXiv: 1401.4208. Google Scholar T. Carnes, C. Nagarajan, S. M. Wild and A. Van Zuylen, Maximizing influence in a competitive social network: a follower's perspective, ICEC '07 Proceedings of the Ninth International Conference on Electronic Commerce, (2007), 351-360. doi: 10.1145/1282100.1282167. Google Scholar M. Cha, H. Haddadi, F. Benevenuto and P. K. Gummadi, Measuring user influence in twitter: The million follower fallacy, ICWSM, 10 (2010), 10-17. Google Scholar M. Cha, A. Mislove and K. P. Gummadi, A measurement-driven analysis of information propagation in the flickr social network, 2009,721–730. Google Scholar M. Cha, A. Mislove and K. P. Gummadi, A measurement-driven analysis of information propagation in the flickr social network, in Proceedings of the 18th International Conference on World Wide Web, ACM, Madrid, Spain, 2009,721–730. doi: 10.1145/1526709.1526806. Google Scholar Y. Chang, X. Wang, Q. Mei and Y. Liu, Towards twitter context summarization with user influence models, in Proceedings of the Sixth ACM International Conference on Web Search and Data Mining, ACM, Rome, Italy, 2013,527–536. doi: 10.1145/2433396.2433464. Google Scholar V. Chaoji, S. Ranu, R. Rastogi and R. Bhatt, Recommendations to boost content spread in social networks, in Proceedings of the 21st International Conference on World Wide Web, ACM, Lyon, France, 2012,529–538. doi: 10.1145/2187836.2187908. Google Scholar L. Chen, X. Li and J. Han, Medrank: discovering influential medical treatments from literature by information network analysis, in Proceedings of the Twenty-Fourth Australasian Database Conference, Australian Computer Society, Inc., Adelaide, Australia, 2013, 3–12. Google Scholar S. Chen, J. Fan, G. Li, J. Feng, K.-l. Tan and J. Tang, Online topic-aware influence maximization, Proceedings of the VLDB Endowment, 8 (2015), 666-677. doi: 10.14778/2735703.2735706. Google Scholar W. Chen, A. Collins, R. Cummings, T. Ke, Z. Liu, D. Rincon, X. Sun, Y. Wang, W. Wei and Y. Yuan, Influence maximization in social networks when negative opinions may emerge and propagate, Proceedings of the 2011 SIAM International Conference on Data Mining, (2011), 379-390. doi: 10.1137/1.9781611972818.33. Google Scholar W. Chen, T. Lin and C. Yang, Efficient topic-aware influence maximization using preprocessing, CoRR, abs/1403.0057. Google Scholar W. Chen, Z. Liu, X. Sun and Y. Wang, A game-theoretic framework to identify overlapping communities in social networks, Data Min. Knowl. Discov., 21 (2010), 224-240. doi: 10.1007/s10618-010-0186-6. Google Scholar W. 
Chen, P. Lu, X. Sun, B. Tang, Y. Wang and Z. A. Zhu, Optimal pricing in social networks with incomplete information, in Internet and Network Economics, Springer, 2011, 49–60. Google Scholar W. Chen, W. Lu and N. Zhang, Time-critical influence maximization in social networks with time-delayed diffusion process, 2012. Google Scholar W. Chen, C. Wang and Y. Wang, Scalable influence maximization for prevalent viral marketing in large-scale social networks, Data Min. Knowl. Discov., 25 (2012), 545-576. doi: 10.1007/s10618-012-0262-1. Google Scholar W. Chen, Y. Wang and S. Yang, Efficient influence maximization in social networks, in Proceedings of the 15th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ACM, Paris, France, 2009,199–208. doi: 10.1145/1557019.1557047. Google Scholar W. Chen, Y. Yuan and L. Zhang, Scalable influence maximization in social networks under the linear threshold model, in Proceedings of the 2010 IEEE International Conference on Data Mining, IEEE Computer Society, 2010, 88–97. doi: 10.1109/ICDM.2010.118. Google Scholar Y. -C. Chen, W. -Y. Zhu, W. -C. Peng, W. -C. Lee and S. -Y. Lee, Cim: community-based influence maximization in social networks, ACM Transactions on Intelligent Systems and Technology (TIST), 5 (2014), Article No. 25. doi: 10.1145/2532549. Google Scholar S. Cheng, H. Shen, J. Huang, W. Chen and X. Cheng, Imrank: Influence maximization via finding self-consistent ranking, SIGIR '14 Proceedings of the 37th International ACM SIGIR Conference on Research & Development in Information Retrieval, (2014), 475-484. doi: 10.1145/2600428.2609592. Google Scholar N. A. Christakis and J. H. Fowler, The spread of obesity in a large social network over 32 years, N Engl J Med, 357 (2007), 370-379. doi: 10.1056/NEJMsa066082. Google Scholar N. A. Christakis and J. H. Fowler, The collective dynamics of smoking in a large social network, New England Journal of Medicine, 358 (2008), 2249-2258. doi: 10.1056/NEJMsa0706154. Google Scholar P. Clifford and A. Sudbury, A model for spatial conflict, Biometrika, 60 (1973), 581-588. doi: 10.1093/biomet/60.3.581. Google Scholar L. Corazzini, F. Pavesi, B. Petrovich and L. Stanca, Influential listeners: An experiment on persuasion bias in social networks, European Economic Review, 56 (2012), 1276-1288. Google Scholar D. Cosley, D. P. Huttenlocher, J. M. Kleinberg, X. Lan and S. Suri, Sequential influence models in social networks., ICWSM, 10 (2010), 26. Google Scholar D. M. Cutler and E. L. Glaeser, Social interactions and smoking, Technical report, National Bureau of Economic Research, (2007), 1-28. doi: 10.3386/w13477. Google Scholar A. Das, S. Gollapudi and K. Munagala, Modeling opinion dynamics in social networks, WSDM '14 Proceedings of the 7th ACM International Conference on Web Search and Data Mining, (2014), 403-412. doi: 10.1145/2556195.2559896. Google Scholar A. Das, S. Gollapudi, R. Panigrahy and M. Salek, Debiasing social wisdom, 2013,500–508. Google Scholar A. Datta, A. Datta, A. D. Procaccia and Y. Zick, Influence in classification via cooperative game theory, arXiv preprint, arXiv: 1505.00036. Google Scholar I. de Sola Pool and M. Kochen, Contacts and influence, Social Networks, 1 (1979), 5-51. doi: 10.1016/0378-8733(78)90011-4. Google Scholar E. D. Demaine, M. Hajiaghayi, H. Mahini, D. L. Malec, S. Raghavan, A. Sawant and M. 
Zadimoghadam, How to influence people with partial incentives, in World Wide Web Conferences, International World Wide Web Conferences Steering Committee, Republic and Canton of Geneva, Switzerland, 2014,937–948. doi: 10.1145/2566486.2568039. Google Scholar T. N. Dinh, D. T. Nguyen and M. T. Thai, Cheap, easy, and massively effective viral marketing in social networks: truth or fiction?, in Proceedings of the 23rd ACM conference on Hypertext and Social Media, ACM, Milwaukee, Wisconsin, USA, 2012,165–174. doi: 10.1145/2309996.2310024. Google Scholar P. S. Dodds, R. Muhamad and D. J. Watts, An experimental study of search in global social networks, Science, 301 (2003), 827-829. doi: 10.1126/science.1081058. Google Scholar P. Domingos and M. Richardson, Mining the network value of customers, in Proceedings of the Seventh ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ACM, San Francisco, California, 2001, 57–66. doi: 10.1145/502512.502525. Google Scholar Y. Dong, R. A. Johnson and N. V. Chawla, Will this paper increase your h-index?: Scientific impact prediction, Machine Learning and Knowledge Discovery in Databases, (2015), 259-263. doi: 10.1007/978-3-319-23461-8_26. Google Scholar Z. Duan, W. Li and Z. Cai, Distributed auctions for task assignment and scheduling in mobile crowdsensing systems, in Distributed Computing Systems (ICDCS), 2017 IEEE 37th International Conference on, IEEE, 2017,635–644. doi: 10.1109/ICDCS.2017.121. Google Scholar Z. Duan, M. Yan, Z. Cai, X. Wang, M. Han and Y. Li, Truthful incentive mechanisms for social cost minimization in mobile crowdsourcing systems, Sensors, 16 (2016), 481. doi: 10.3390/s16040481. Google Scholar I. Eleta, Multilingual use of twitter: Social networks and language choice, in ACM Conference on Computer-Supported Cooperative Work and Social Computing, ACM, New York, NY, USA, 2012,363–366. doi: 10.1145/2141512.2141621. Google Scholar E. Even-Dar and A. Shapira, A note on maximizing the spread of influence in social networks, in Internet and Network Economics, Springer, 2007,281–286. doi: 10.1007/978-3-540-77105-0_27. Google Scholar Y. Fan and C. R. Shelton, Learning continuous-time social network dynamics, 2009,161–168. Google Scholar K. Feng, G. Cong, S. S. Bhowmick and S. Ma, In search of influential event organizers in online social networks, SIGMOD '14 Proceedings of the 2014 ACM SIGMOD International Conference on Management of Data, (2014), 63-74. doi: 10.1145/2588555.2612173. Google Scholar J. H. Fowler, N. A. Christakis, Steptoe and D. Roux, Dynamic spread of happiness in a large social network: Longitudinal analysis of the framingham heart study social network, BMJ: British Medical Journal, 23–27. Google Scholar L. C. Freeman, A set of measures of centrality based on betweenness, Sociometry, 40 (1977), 35-41. doi: 10.2307/3033543. Google Scholar P. J. Giabbanelli, A. Alimadad, V. Dabbaghian and D. T. Finegood, Modeling the influence of social networks and environment on energy balance and obesity, Journal of Computational Science, 3 (2012), 17-27. Google Scholar A. Goyal, F. Bonchi and L. V. S. Lakshmanan, Learning influence probabilities in social networks, in Proceedings of the Third ACM International Conference on Web Search and Data Mining, ACM, New York, New York, USA, 2010,241–250. doi: 10.1145/1718487.1718518. Google Scholar A. Goyal, F. Bonchi and L. V. S. Lakshmanan, A data-based approach to social influence maximization, Proc. VLDB Endow., 5 (2011), 73-84. doi: 10.14778/2047485.2047492. 
Google Scholar A. Goyal, F. Bonchi, L. V. Lakshmanan and S. Venkatasubramanian, Approximation analysis of influence spread in social networks, arXiv preprint, arXiv: 1008.2005. Google Scholar A. Goyal, F. Bonchi, L. V. Lakshmanan and S. Venkatasubramanian, On minimizing budget and time in influence propagation over social networks, Social Network Analysis and Mining, 3 (2013), 179-192. doi: 10.1007/s13278-012-0062-z. Google Scholar A. Goyal and L. V. S. Lakshmanan, Recmax: Exploiting recommender systems for fun and profit, in Proceedings of the 18th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ACM, Beijing, China, 2012, 1294–1302. doi: 10.1145/2339530.2339731. Google Scholar A. Goyal, W. Lu and L. V. S. Lakshmanan, Celf++: Optimizing the greedy algorithm for influence maximization in social networks, in Proceedings of the 20th International Conference Companion on World Wide Web, Proceedings of the 20th international conference companion on World wide web, ACM, Hyderabad, India, 2011, 47–48. doi: 10.1145/1963192.1963217. Google Scholar A. Goyal, W. Lu and L. V. S. Lakshmanan, Simpath: An efficient algorithm for influence maximization under the linear threshold model, in Proceedings of the 2011 IEEE 11th International Conference on Data Mining, IEEE Computer Society, 2011,211–220. doi: 10.1109/ICDM.2011.132. Google Scholar S. Goyal and M. Kearns, Competitive contagion in networks, STOC'12Proceedings of the 2012 ACM Symposium on Theory of Computing, (2012), 759-774. doi: 10.1145/2213977.2214046. Google Scholar M. Grabisch and A. Rusinowska, A model of influence in a social network, Theory and Decision, 69 (2010), 69-96. doi: 10.1007/s11238-008-9109-z. Google Scholar M. Granovetter, The strength of weak ties, American Journal of Sociology, 78 (1973), l. Google Scholar D. Gruhl, R. Guha, D. Liben-Nowell and A. Tomkins, Information diffusion through blogspace, 2004,491–501. Google Scholar A. Guille, H. Hacid, C. E. C. Favre and D. A. Zighed, Information diffusion in online social networks: A survey, ACM SIGMOD Record, 42 (2013), 17-28. doi: 10.1145/2503792.2503797. Google Scholar B. Han, P. Hui, V. A. Kumar, M. V. Marathe, J. Shao and A. Srinivasan, Mobile data offloading through opportunistic communications and social participation, Mobile Computing, IEEE Transactions on, 11 (2012), 821-834. doi: 10.1109/TMC.2011.101. Google Scholar B. Han and A. Srinivasan, Your friends have more friends than you do: identifying influential mobile users through random walks, in Proceedings of the thirteenth ACM international symposium on Mobile Ad Hoc Networking and Computing, ACM, Hilton Head, South Carolina, USA, 2012, 5–14. doi: 10.1145/2248371.2248376. Google Scholar M. Han, Z. Duan, C. Ai, F. W. Lybarger, Y. Li and A. G. Bourgeois, Time constraint influence maximization algorithm in the age of big data, International Journal of Computational Science and Engineering, 15 (2017), 165-175. doi: 10.1504/IJCSE.2017.087401. Google Scholar M. Han, Z. Duan and Y. Li, Privacy issues for transportation cyber physical systems, in Secure and Trustworthy Transportation Cyber-Physical Systems, Springer, Singapore, 2017, 67–86. doi: 10.1007/978-981-10-3892-1_4. Google Scholar M. Han, Q. Han, L. Li, J. Li and Y. Li, Maximizing influence in sensed heterogenous social network with privacy preservation, International Journal of Sensor Networks, (2017), 1-11. doi: 10.1504/IJSNET.2017.10007412. Google Scholar M. Han, J. Li, Z. Cai and Q. 
Han, Privacy reserved influence maximization in gps-enabled cyber-physical and online social networks, in Social Computing and Networking (SocialCom), 2016 IEEE International Conferences on, IEEE, 2016,284–292. doi: 10.1109/BDCloud-SocialCom-SustainCom.2016.51. Google Scholar M. Han, J. Li and Z. Zou, Finding k close subgraphs in an uncertain graph, Jisuanji Kexue yu Tansuo, 5 (2011), 791-803. Google Scholar M. Han, L. Li, X. Peng, Z. Hong and M. Li, Information privacy of cyber transportation system: Opportunities and challenges, RIIT '17 Proceedings of the 6th Annual Conference on Research in Information Technology, (2017), 23-28. doi: 10.1145/3125649.3125652. Google Scholar M. Han, L. Li, Y. Xie, J. Wang, Z. Duan, J. Li and M. Yan, Cognitive approach for location privacy protection, IEEE Access, 6 (2018), 13466-13477. doi: 10.1109/ACCESS.2018.2805464. Google Scholar M. Han, Y. Liang, Z. Duan and Y. Wang, Mining public business knowledge: A case study in sec's edgar, in Social Computing and Networking (SocialCom), 2016 IEEE International Conferences on, IEEE, 2016,393–400. doi: 10.1109/BDCloud-SocialCom-SustainCom.2016.65. Google Scholar M. Han, J. Wang, M. Yan, C. Ai, Z. Duan and Z. Hong, Near-complete privacy protection: Cognitive optimal strategy in location-based services, Procedia Computer Science, 129 (2018), 298-304. doi: 10.1016/j.procs.2018.03.079. Google Scholar M. Han, M. Yan, Z. Cai and Y. Li, An exploration of broader influence maximization in timeliness networks with opportunistic selection, Journal of Network and Computer Applications, 63 (2016), 39-49. doi: 10.1016/j.jnca.2016.01.004. Google Scholar M. Han, M. Yan, Z. Cai, Y. Li, X. Cai and J. Yu, Influence maximization by probing partial communities in dynamic online social networks, Transactions on Emerging Telecommunications Technologies, 28 (2017), e3054. doi: 10.1002/ett.3054. Google Scholar M. Han, M. Yan, J. Li, S. Ji and Y. Li, Generating uncertain networks based on historical network snapshots, in COCOON, 2013,747–758. doi: 10.1007/978-3-642-38768-5_68. Google Scholar M. Han, M. Yan, J. Li, S. Ji and Y. Li, Neighborhood-based uncertainty generation in social networks, Journal of Combinatorial Optimization, 28 (2014), 561-576. doi: 10.1007/s10878-013-9684-y. Google Scholar M. Han, W. Zhang and J.-Z. Li, Raking: An efficient k-maximal frequent pattern mining algorithm on uncertain graph database, Jisuanji Xuebao(Chinese Journal of Computers), 33 (2010), 1387-1395. doi: 10.3724/SP.J.1016.2010.01387. Google Scholar R. A. Hanneman and M. Riddle, Introduction to social network methods, 2005. Google Scholar D. Hatano, T. Fukunaga, T. Maehara and K. -i. Kawarabayashi, Lagrangian decomposition algorithm for allocating marketing channels, 2015. Google Scholar J. He, J. Hopcroft, H. Liang, S. Suwajanakorn and L. Wang, Detecting the structure of social networks using (α, β)-communities, in Algorithms and Models for the Web Graph, Springer, 6732 (2011), 26–37. doi: 10.1007/978-3-642-21286-4_3. Google Scholar X. He and D. Kempe, Price of anarchy for the n-player competitive cascade game with submodular activation functions, in Web and Internet Economics, Springer, 2013,232–248. doi: 10.1007/978-3-642-45046-4_20. Google Scholar X. He and D. Kempe, Stability of influence maximization, KDD '14 Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, (2014), 1256-1265. doi: 10.1145/2623330.2623746. Google Scholar X. He, G. Song, W. Chen and Q. 
Jiang, Influence blocking maximization in social networks under the competitive linear threshold model, 2012,463–474. Google Scholar Z. He, Z. Cai and X. Wang, Modeling propagation dynamics and developing optimized countermeasures for rumor spreading in online social networks, in Distributed Computing Systems (ICDCS), 2015 IEEE 35th International Conference on, IEEE, 2015,205–214. doi: 10.1109/ICDCS.2015.29. Google Scholar Z. He, Z. Cai, J. Yu, X. Wang, Y. Sun and Y. Li, Cost-efficient strategies for restraining rumor spreading in mobile social networks, IEEE Transactions on Vehicular Technology, 66 (2017), 2789-2800. doi: 10.1109/TVT.2016.2585591. Google Scholar M. Heidari, M. Asadpour and H. Faili, Smg: Fast scalable greedy algorithm for influence maximization in social networks, Physica A: Statistical Mechanics and its Applications, 420 (2015), 124-133. doi: 10.1016/j.physa.2014.10.088. Google Scholar C. Hoede and R. R. Bakker, A theory of decisional power, Journal of Mathematical Sociology, 8 (1982), 309-322. doi: 10.1080/0022250X.1982.9989927. Google Scholar J. Hopcroft, T. Lou and J. Tang, Who will follow you back?: Reciprocal relationship prediction, CIKM '11 Proceedings of the 20th ACM International Conference on Information and Knowledge Management, (2011), 1137-1146. doi: 10.1145/2063576.2063740. Google Scholar J. Hu, K. Meng, X. Chen, C. Lin and J. Huang, Analysis of influence maximization in large-scale social networks, SIGMETRICS Perform. Eval. Rev., 41 (2014), 78-81. doi: 10.1145/2627534.2627559. Google Scholar X. Hu, L. Tang, J. Tang and H. Liu, Exploiting social relations for sentiment analysis in microblogging, in Proceedings of the Sixth ACM International Conference on Web Search and Data Mining, ACM, Rome, Italy, 2013,537–546. doi: 10.1145/2433396.2433465. Google Scholar Z. Hu, J. Yao, B. Cui and E. Xing, Community level diffusion extraction, SIGMOD '15 Proceedings of the 2015 ACM SIGMOD International Conference on Management of Data, (2015), 1555-1569. doi: 10.1145/2723372.2723737. Google Scholar H. Huang, J. Tang, S. Wu, L. Liu and Others, Mining triadic closure patterns in social networks, WWW '14 Companion Proceedings of the 23rd International Conference on World Wide Web, 2014,499–504. doi: 10.1145/2567948.2576940. Google Scholar J. -P. Huang, C. -Y. Wang and H. -Y. Wei, Strategic information diffusion through online social networks, in Proceedings of the 4th International Symposium on Applied Sciences in Biomedical and Communication Technologies, ACM, Barcelona, Spain, 2011, Article No. 88, 5pp. doi: 10.1145/2093698.2093786. Google Scholar J. Huang, X. -Q. Cheng, H. -W. Shen, T. Zhou and X. Jin, Exploring social influence via posterior effect of word-of-mouth recommendations, in Proceedings of the Fifth ACM International Conference on Web Search and Data Mining, ACM, Seattle, Washington, USA, 2012,573–582. doi: 10.1145/2124295.2124365. Google Scholar J. E. L. Iribarren and E. Moro, Impact of human activity patterns on the dynamics of information diffusion, Physical Review Letters, 103 (2009), 038702. doi: 10.1103/PhysRevLett.103.038702. Google Scholar J. H. Janssen, W. A. IJsselsteijn and J. H. Westerink, How affective technologies can influence intimate interactions and improve social connectedness, International Journal of Human-Computer Studies, 72 (2014), 33-43. doi: 10.1016/j.ijhcs.2013.09.007. Google Scholar S. Ji, Z. Cai, M. Han and R. 
Beyah, Whitespace measurement and virtual backbone construction for cognitive radio networks: From the social perspective, in Sensing, Communication, and Networking (SECON), 2015 12th Annual IEEE International Conference on, IEEE, 2015,435–443. doi: 10.1109/SAHCN.2015.7338344. Google Scholar F. Jiang, S. Jin, Y. Wu and J. Xu, A uniform framework for community detection via influence maximization in social networks, IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM 2014), 2014. doi: 10.1109/ASONAM.2014.6921556. Google Scholar A. P. Joshi, M. Han and Y. Wang, A survey on security and privacy issues of blockchain technology, Mathematical Foundations of Computing, 1 (2018), 121-147. Google Scholar K. Jung, W. Heo and W. Chen, Irie: A scalable influence maximization algorithm for independent cascade model and its extensions, arXiv preprint, arXiv: 1111.4795. Google Scholar D. Kempe, J. Kleinberg, S. Oren and A. Slivkins, Selection and influence in cultural dynamics, in Proceedings of the fourteenth ACM conference on Electronic commerce, ACM, Philadelphia, Pennsylvania, USA, 2013,585–586. doi: 10.1145/2492002.2482566. Google Scholar D. Kempe, J. Kleinberg and E. V. Tardos, Influential nodes in a diffusion model for social networks, Automata, Languages and Programming, (2005), 1127-1138. doi: 10.1007/11523468_91. Google Scholar D. Kempe, J. Kleinberg and V. Tardos, Maximizing the spread of influence through a social network, Theory Comput., 11 (2015), 105-147. doi: 10.4086/toc.2015.v011a004. Google Scholar S. Khanna and B. Lucier, Influence maximization in undirected networks, Proceedings of the Twenty-Fifth Annual ACM-SIAM Symposium on Discrete Algorithms, 1482–1496, ACM, New York, 2014. doi: 10.1137/1.9781611973402.109. Google Scholar Y. A. Kim and J. Srivastava, Impact of social influence in e-commerce decision making, in Proceedings of the Ninth International Conference on Electronic Commerce, ACM, Minneapolis, MN, USA, 2007,293–302. doi: 10.1145/1282100.1282157. Google Scholar M. Kimura, K. Saito, R. Nakano and H. Motoda, Extracting influential nodes on a social network for information diffusion, Data Min. Knowl. Discov., 20 (2010), 70-97. doi: 10.1007/s10618-009-0150-5. Google Scholar F. Kooti, W. A. Mason, K. P. Gummadi and M. Cha, Predicting emerging social conventions in online social networks, in Proceedings of the 21st ACM International Conference on Information and Knowledge Management, ACM, Maui, Hawaii, USA, 2012,445–454, 2396820. doi: 10.1145/2396761.2396820. Google Scholar J. Kostka, Y. A. Oswald and R. Wattenhofer, Word of mouth: Rumor dissemination in social networks, in Structural Information and Communication Complexity, Springer, 5058 (2008), 185–196. doi: 10.1007/978-3-540-69355-0_16. Google Scholar R. Kumar, J. Novak, P. Raghavan and A. Tomkins, On the bursty evolution of blogspace, WWW '03 Proceedings of the 12th international conference on World Wide Web, (2003), 568-576. doi: 10.1145/775152.775233. Google Scholar R. Kumar, J. Novak and A. Tomkins, Structure and evolution of online social networks, KDD '06 Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, (2006), 611-617. doi: 10.1145/1150402.1150476. Google Scholar O. Kwon and Y. Wen, An empirical study of the factors affecting social network service use, Computers in Human Behavior, 26 (2010), 254-263. doi: 10.1016/j.chb.2009.04.011. Google Scholar T. La Fond and J. 
Neville, Randomization tests for distinguishing social influence and homophily effects, WWW '10 Proceedings of the 19th International Conference on World Wide Web, (2010), 601-610. doi: 10.1145/1772690.1772752. Google Scholar M. Lahiri, A. S. Maiya, R. Sulo, Habiba and T. Y. B. Wolf, The impact of structural changes on predictions of diffusion in networks, in Proceedings of the 2008 IEEE International Conference on Data Mining Workshops, IEEE Computer Society, 2008,939–948. doi: 10.1109/ICDMW.2008.92. Google Scholar X. N. Lam, T. Vu, T. D. Le and A. D. Duong, Addressing cold-start problem in recommendation systems, CUIMC '08 Proceedings of the 2nd International Conference on Ubiquitous Information Management and Communication, (2008), 208-211. doi: 10.1145/1352793.1352837. Google Scholar I. Leftheriotis and M. N. Giannakos, Using social media for work: Losing your time or improving your work?, Computers in Human Behavior, 31 (2014), 134-142. doi: 10.1016/j.chb.2013.10.016. Google Scholar S. Lei, S. Maniu, L. Mo, R. Cheng and P. Senellart, Online influence maximization, KDD '15 Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, (2015), 645-654. doi: 10.1145/2783258.2783271. Google Scholar J. Leskovec, L. Backstrom and J. Kleinberg, Meme-tracking and the dynamics of the news cycle, KDD '09 Proceedings of the 15th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, (2009), 497-506. doi: 10.1145/1557019.1557077. Google Scholar J. Leskovec, A. Krause, C. Guestrin, C. Faloutsos, J. VanBriesen and N. Glance, Costeffective outbreak detection in networks, in Proceedings of the 13th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ACM, San Jose, California, USA, 2007,420–429. doi: 10.1145/1281192.1281239. Google Scholar J. Leskovec, M. McGlohon, C. Faloutsos, N. Glance and M. Hurst, Information propagation and network evolution on the web, DA Project, Machine Learning Dept. Carnegie Mellon University. Google Scholar J. Leskovec, M. McGlohon, C. Faloutsos, N. S. Glance and M. Hurst, Patterns of cascading behavior in large blog graphs, Proceedings of the 2007 SIAM International Conference on Data Miningvol, 7 (2007), 551-556. doi: 10.1137/1.9781611972771.60. Google Scholar K. Lewis, M. Gonzalez and J. Kaufman, Social selection and peer influence in an online social network, Proc Natl Acad Sci U S A, 109 (2012), 68-72. doi: 10.1073/pnas.1109739109. Google Scholar C. -T. Li, H. -P. Hsieh, S. -D. Lin and M. -K. Shan, Finding influential seed successors in social networks, in Proceedings of the 21st International Conference Companion on World Wide Web, ACM, Lyon, France, 2012,557–558. doi: 10.1145/2187980.2188125. Google Scholar D. Li, J. Tang, Y. Ding, X. Shuai, T. Chambers, G. Sun, Z. Luo and J. Zhang, Topic-level opinion influence model (toim): An investigation using tencent microblogging, Journal of the Association for Information Science and Technology, 66 (2015), 2657-2673. doi: 10.1002/asi.23350. Google Scholar G. Li, S. Chen, J. Feng, K.-l. Tan and W.-s. Li, Efficient location-aware influence maximization, SIGMOD '14 Proceedings of the 2014 ACM SIGMOD International Conference on Management of Data, (2017), 87-98. doi: 10.1145/2588555.2588561. Google Scholar H. Li, S. S. Bhowmick, A. Sun and J. Cui, Conformity-aware influence maximization in online social networks, The VLDB Journal-The International Journal on Very Large Data Bases, 24 (2015), 117-141. doi: 10.1007/s00778-014-0366-x. 
Google Scholar J. Li, Z. Cai, J. Wang, M. Han and Y. Li, Truthful incentive mechanisms for geographical position conflicting mobile crowdsensing systems, IEEE Transactions on Computational Social Systems, 5 (2018), 324-334. doi: 10.1109/TCSS.2018.2797225. Google Scholar J. Li, X. Guo, L. Guo, S. Ji, M. Han and Z. Cai, Optimal routing with scheduling and channel assignment in multi-power multi-radio wireless sensor networks, Ad Hoc Networks, 31 (2015), 45-62. doi: 10.1016/j.adhoc.2015.03.006. Google Scholar R.-H. Li, L. Qin, J. X. Yu and R. Mao, Influential community search in large networks, Proceedings of the VLDB Endowment, 8 (2015), 509-520. doi: 10.14778/2735479.2735484. Google Scholar R. Li, S. Wang, H. Deng, R. Wang and K. C. -C. Chang, Towards social user profiling: Unified and discriminative influence model for inferring home locations, in Proceedings of the 18th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ACM, Beijing, China, 2012, 1023–1031. doi: 10.1145/2339530.2339692. Google Scholar Y. Li, W. Chen, Y. Wang and Z. -L. Zhang, Influence diffusion dynamics and influence maximization in social networks with friend and foe relationships, in Proceedings of the sixth ACM international conference on Web search and data mining, ACM, Rome, Italy, 2013,657–666. doi: 10.1145/2433396.2433478. Google Scholar Y. Li, D. Zhang and K.-L. Tan, Real-time targeted influence maximization for online advertisements, Proceedings of the VLDB Endowment, 8 (2015), 1070-1081. doi: 10.14778/2794367.2794376. Google Scholar S.-C. Lin, S.-D. Lin and M.-S. Chen, A learning-based framework to handle multi-round multi-party influence maximization on social networks, KDD '15 Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, (2015), 695-704. doi: 10.1145/2783258.2783392. Google Scholar Y. Lin and J. Lui, Algorithmic design for competitive influence maximization problems, arXiv preprint, arXiv: 1410.8664. Google Scholar X. Ling, C. Wu, S. Ji and M. Han, H2dos: An application-layer dos attack towards http/2 protocol, in Proceedings of SecureComm: Security and Privacy in Communication Networks 2017, SecureComm '17, 2017. Google Scholar B. Liu, G. Cong, Y. Zeng, D. Xu and Y. M. Chee, Influence spreading path and its application to the time constrained social influence maximization problem and beyond, Knowledge and Data Engineering, IEEE Transactions on, 26 (2014), 1904-1917. doi: 10.1109/TKDE.2013.106. Google Scholar L. Liu, J. Tang, J. Han, M. Jiang and S. Yang, Mining topic-level influence in heterogeneous networks, in Proceedings of the 19th ACM International Conference on Information and Knowledge Management, ACM, Toronto, ON, Canada, 2010,199–208. doi: 10.1145/1871437.1871467. Google Scholar L. Liu, J. Tang, J. Han and S. Yang, Learning influence from heterogeneous social networks, Data Mining and Knowledge Discovery, 25 (2012), 511-544. doi: 10.1007/s10618-012-0252-3. Google Scholar Q. Liu, B. Xiang, E. Chen, H. Xiong, F. Tang and J. X. Yu, Influence maximization over large-scale social networks: A bounded linear approach, CIKM '14 Proceedings of the 23rd ACM International Conference on Conference on Information and Knowledge Management, (2014), 171-180. doi: 10.1145/2661829.2662009. Google Scholar X. Liu, M. Li, S. Li, S. Peng, X. Liao and X. Lu, Imgpu: Gpu accelerated influence maximization in large-scale social networks. Google Scholar X. Liu, S. Li, X. Liao, L. Wang and Q. 
Wu, In-time estimation for influence maximization in large-scale social networks, in SNS '12 Proceedings of the Fifth Workshop on Social Network Systems, 2012, Article No. 3, 1–6. doi: 10.1145/2181176.2181179. Google Scholar T. Lou and J. Tang, Mining structural hole spanners through information diffusion in social networks, 2013,825–836. Google Scholar T. Lou, J. Tang, J. Hopcroft, Z. Fang and X. Ding, Learning to predict reciprocity and triadic closure in social networks, ACM Trans. Knowl. Discov. Data, 7 (2013), 1-25. doi: 10.1145/2499907.2499908. Google Scholar J. -L. Lu, L. -Y. Wei and M. -Y. Yeh, Influence maximization in a social network in the presence of multiple influences and acceptances, 2014. Google Scholar W. Lu, F. Bonchi, A. Goyal and L. V. S. Lakshmanan, The bang for the buck: Fair competitive viral marketing from the host perspective, in Proceedings of the 19th ACM SIGKDD international conference on Knowledge discovery and data mining, ACM, Chicago, Illinois, USA, 2013,928–936. doi: 10.1145/2487575.2487649. Google Scholar Y. Lu, J. Ren, J. Qian, M. Han, Y. Huo and T. Jing, Predictive contention window-based broadcast collision mitigation strategy for vanet, in Social Computing and Networking (SocialCom), 2016 IEEE International Conferences on, IEEE, 2016,209–215. doi: 10.1109/BDCloud-SocialCom-SustainCom.2016.41. Google Scholar Y. Lu, Y. Zhu, M. Han, J. S. He and Y. Zhang, A survey of gpu accelerated svm, in Proceedings of the 2014 ACM Southeast Regional Conference, ACM, 2014, Article No. 15. doi: 10.1145/2638404.2638474. Google Scholar Z. Lu, L. Fan, W. Wu, B. Thuraisingham and K. Yang, Efficient influence spread estimation for influence maximization under the linear threshold model, Computational Social Networks, 1 (2014), 1-19. doi: 10.1186/s40649-014-0002-3. Google Scholar B. Lucier, J. Oren and Y. Singer, Influence at scale: Distributed computation of complex contagion in networks, KDD '15 Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, (2015), 735-744. doi: 10.1145/2783258.2783334. Google Scholar Z. Z. M Han J Li, K-close: Algorithm for finding the close regions in wireless sensor networks based uncertain graph mining technology, Journal of Software, 22 (2011), 131-141. Google Scholar K. Macropol and A. Singh, Scalable discovery of best clusters on large graphs, Proceedings of the VLDB Endowment, 3 (2010), 693-702. doi: 10.14778/1920841.1920930. Google Scholar A. S. Maiya and T. Y. Berger-Wolf, Inferring the maximum likelihood hierarchy in social networks, in Proceedings of the 2009 International Conference on Computational Science and Engineering, vol. 4, IEEE Computer Society, 2009,245–250. doi: 10.1109/CSE.2009.235. Google Scholar M. McPherson, L. Smith-Lovin and J. M. Cook, Birds of a feather: Homophily in social networks, Annual Review of Sociology, 27 (2001), 415-444. doi: 10.1146/annurev.soc.27.1.415. Google Scholar I. Mele, F. Bonchi and A. Gionis, The early-adopter graph and its application to web-page recommendation, in Proceedings of the 21st ACM International Conference on Information and Knowledge Management, ACM, Maui, Hawaii, USA, 2012, 1682–1686. doi: 10.1145/2396761.2398497. Google Scholar S. Mihara, S. Tsugawa and H. Ohsaki, Influence maximization problem for unknown social networks, ASONAM '15 Proceedings of the 2015 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining, (2015), 1539-1546. doi: 10.1145/2808797.2808885. Google Scholar S. 
Milgram, The small world problem, Psychology today, 2 (1967), 60-67. Google Scholar A. Mislove, H. S. Koppula, K. P. Gummadi, P. Druschel and B. Bhattacharjee, Growth of the flickr social network, 2008, 25–30. Google Scholar A. Mislove, M. Marcon, K. P. Gummadi, P. Druschel and B. Bhattacharjee, Measurement and analysis of online social networks, IMC '07 Proceedings of the 7th ACM SIGCOMM conference on Internet measurement, (2007), 29-42. doi: 10.1145/1298306.1298311. Google Scholar E. Mossel and S. Roch, On the submodularity of influence in social networks, in Proceedings of the Thirty-Ninth Annual ACM Symposium on Theory of Computing, ACM, San Diego, California, USA, 2007,128–134. doi: 10.1145/1250790.1250811. Google Scholar E. Mossel and G. Schoenebeck, Reaching consensus on social networks, 2010,214–229. Google Scholar S. A. Myers and J. Leskovec, On the convexity of latent social network inference, threshold, 9 (2010), 20. Google Scholar S. A. Myers and J. Leskovec, The bursty dynamics of the twitter information network, WWW '14 Proceedings of the 23rd International Conference on World Wide Web, (2014), 913-924. doi: 10.1145/2566486.2568043. Google Scholar S. A. Myers, C. Zhu and J. Leskovec, Information diffusion and external influence in networks, in Proceedings of the 18th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ACM, Beijing, China, 2012, 33–41. doi: 10.1145/2339530.2339540. Google Scholar G. L. Nemhauser, L. A. Wolsey and M. L. Fisher, An analysis of approximations for maximizing submodular set functions. Ⅰ, Mathematical Programming, 14 (1978), 265-294. doi: 10.1007/BF01588971. Google Scholar M. E. Newman, Spread of epidemic disease on networks, Physical Review E, 66 (2002), 016128, 11pp. doi: 10.1103/PhysRevE.66.016128. Google Scholar M. E. Newman, The structure and function of complex networks, SIAM Review, 45 (2003), 167-256. doi: 10.1137/S003614450342480. Google Scholar N. P. Nguyen, T. N. Dinh, X. Ying and M. T. Thai, Adaptive algorithms for detecting community structure in dynamic social networks, 2011 Proceedings IEEE INFOCOM, 2011. doi: 10.1109/INFCOM.2011.5935045. Google Scholar N. P. Nguyen, G. Yan, M. T. Thai and S. Eidenbenz, Containment of misinformation spread in online social networks, in Proceedings of the 3rd Annual ACM Web Science Conference, ACM, Evanston, Illinois, 2012,213–222. doi: 10.1145/2380718.2380746. Google Scholar J. Ok, Y. Jin, J. Choi, J. Shin and Y. Yi, Influence maximization over strategic diffusion in social networks, 2014 48th Annual Conference on Information Sciences and Systems (CISS), 2014. doi: 10.1109/CISS.2014.6814155. Google Scholar J. P. Onnela and F. Reed-Tsochas, Spontaneous emergence of social influence in online systems, Proc Natl Acad Sci U S A, 107 (2010), 18375-18380. doi: 10.1073/pnas.0914572107. Google Scholar L. Page, S. Brin, R. Motwani and T. Winograd, The pagerank citation ranking: Bringing order to the web. Google Scholar W. Pan, W. Dong, M. Cebrian, T. Kim, J. H. Fowler and A. S. Pentland, Modeling dynamical influence in human interaction: Using data to make better inferences about influence within social systems, Signal Processing Magazine, IEEE, 29 (2012), 77-86. Google Scholar P. Parchas, F. Gullo, D. Papadias and F. Bonchi, The pursuit of a good possible world: Extracting representative instances of uncertain graphs, SIGMOD '14 Proceedings of the 2014 ACM SIGMOD International Conference on Management of Data, (2014), 967-978. doi: 10.1145/2588555.2593668. Google Scholar F. 
Paulsen, Tönnies, ferdinand. gemeinschaft und gesellschaft. abhandlung des communismus und des socialismus als empirischer culturformen. leipzig, fues's verlag, 1887, Vierteljahresschrift Für Wissenschaftliche Philosophie, 12 (1888), 111-119. Google Scholar G. Ritzer and Others, The Blackwell Encyclopedia of Sociology vol. 1479, Blackwell Publishing Malden, MA, 2007. Google Scholar M. G. Rodriguez, D. Balduzzi and B. Sch O Lkopf, Uncovering the temporal dynamics of diffusion networks, arXiv preprint, arXiv: 1105.0697. Google Scholar M. G. Rodriguez, J. Leskovec and A. Krause, Inferring networks of diffusion and influence, in Proceedings of the 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ACM, Washington, DC, USA, 2010, 1019–1028. doi: 10.1145/1835804.1835933. Google Scholar M. G. Rodriguez and B. Sch O Lkopf, Influence maximization in continuous time diffusion networks, arXiv preprint, arXiv: 1205.1682. Google Scholar D. M. Romero, W. Galuba, S. Asur and B. A. Huberman, Influence and passivity in social media, in Proceedings of the 20th International Conference Companion on World Wide Web, ACM, Hyderabad, India, 2011,113–114. doi: 10.1145/1963192.1963250. Google Scholar Y. Rong, X. Wen and H. Cheng, A monte carlo algorithm for cold start recommendation, WWW '14 Proceedings of the 23rd International Conference on World Wide Web, (2014), 327-36. doi: 10.1145/2566486.2567978. Google Scholar J. N. Rosenquist, J. H. Fowler and N. A. Christakis, Social network determinants of depression, Molecular Psychiatry, 16 (2011), 273-281. doi: 10.1038/mp.2010.13. Google Scholar R. A. Rossi, B. Gallagher, J. Neville and K. Henderson, Modeling dynamic behavior in large evolving graphs, in Proceedings of the Sixth ACM International Conference on Web Search and Data Mining, ACM, Rome, Italy, 2013,667–676. doi: 10.1145/2433396.2433479. Google Scholar M. Russell, Mining the Social Web: Analyzing Data from Facebook, Twitter, LinkedIn, and Other Social Media Sites, O'Reilly Media, 2011. Google Scholar K. Saito, M. Kimura, K. Ohara and H. Motoda, Efficient discovery of influential nodes for sis models in social networks, Knowledge and Information Systems, 30 (2012), 613-635. doi: 10.1007/s10115-011-0396-2. Google Scholar K. Saito, M. Kimura, K. Ohara and H. Motoda, Learning asynchronous-time information diffusion models and its application to behavioral data analysis over social networks, Journal of Computer Engineering and Informatics, 1 (2013), 30–57, arXiv: 1204.4528. doi: 10.5963/JCEI0102002. Google Scholar K. Saito, R. Nakano and M. Kimura, Prediction of information diffusion probabilities for independent cascade model, Knowledge-Based Intelligent Information and Engineering Systems, 5179 (2008), 67-75. doi: 10.1007/978-3-540-85567-5_9. Google Scholar M. Salath E, M. Kazandjieva, J. W. Lee, P. Levis, M. W. Feldman and J. H. Jones, A high-resolution human contact network for infectious disease transmission, Proceedings of the National Academy of Sciences, 107 (2010), 22020-22025. Google Scholar D. Sheldon, B. Dilkina, A. N. Elmachtoub, R. Finseth, A. Sabharwal, J. Conrad, C. P. Gomes, D. Shmoys, W. Allen, O. Amundsen and Others, Maximizing the spread of cascades using network design, arXiv preprint, arXiv: 1203.3514. Google Scholar T. Shi, S. Cheng, Z. Cai, Y. Li and J. Li, Retrieving the maximal time-bounded positive influence set from social networks, Personal and Ubiquitous Computing, 20 (2016), 717-730. doi: 10.1007/s00779-016-0943-7. Google Scholar H. Shiokawa, Y. 
Fujiwara and M. Onizuka, Scan++: efficient algorithm for finding clusters, hubs and outliers on large-scale graphs, Proceedings of the VLDB Endowment, 8 (2015), 1178-1189. doi: 10.14778/2809974.2809980. Google Scholar X. Shuai, Y. Ding, J. Busemeyer, S. Chen, Y. Sun and J. Tang, Modeling indirect influence on twitter, International Journal on Semantic Web and Information Systems (IJSWIS), 8 (2012), 20-36. doi: 10.4018/jswis.2012100102. Google Scholar Y. Singer, How to win friends and influence people, truthfully: Influence maximization mechanisms for social networks, in Proceedings of the Fifth ACM International Conference on Web Search and Data Mining, ACM, Seattle, Washington, USA, 2012,733–742. doi: 10.4018/jswis.2012100102. Google Scholar R. Sipos, A. Ghosh and T. Joachims, Was this review helpful to you?: It depends! context and voting patterns in online content, WWW '14 Proceedings of the 23rd International Conference on World Wide Web, (2014), 337-348. doi: 10.1145/2566486.2567998. Google Scholar D. Song and D. A. Meyer, A model of consistent node types in signed directed social networks, 2014 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM 2014), 2014. doi: 10.1109/ASONAM.2014.6921562. Google Scholar G. Song, X. Zhou, Y. Wang and K. Xie, Influence maximization on large-scale mobile social network: A divide-and-conquer method, Parallel and Distributed Systems, IEEE Transactions on, 26 (2015), 1379-1392. doi: 10.1109/TPDS.2014.2320515. Google Scholar J. Stehl E, N. Voirin, A. Barrat, C. Cattuto, L. Isella, J. -F. C. C. O. Pinton, M. Quaggiotto, W. Van den Broeck, C. R E Gis, B. Lina and Others, High-resolution measurements of face-to-face contact patterns in a primary school, PloS one, 6 (2011), 23176. Google Scholar J. Sun and J. Tang, A survey of models and algorithms for social influence analysis, in Social Network Data Analytics, Springer, 2011,177–214. doi: 10.1007/978-1-4419-8462-3_7. Google Scholar T. Sun, W. Chen, Z. Liu, Y. Wang, X. Sun, M. Zhang and C. -Y. Lin, Participation maximization based on social influence in online discussion forums, 2011. Google Scholar F. Tang, Q. Liu, H. Zhu, E. Chen and F. Zhu, Diversified social influence maximization, 2014 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM 2014), 2014. doi: 10.1109/ASONAM.2014.6921625. Google Scholar J. Tang, J. Sun, C. Wang and Z. Yang, Social influence analysis in large-scale networks, in Proceedings of the 15th ACM SIGKDD international conference on Knowledge discovery and data mining, ACM, Paris, France, 2009,807–816, 1557108. doi: 10.1145/1557019.1557108. Google Scholar J. Tang, B. Wang, Y. Yang, P. Hu, Y. Zhao, X. Yan, B. Gao, M. Huang, P. Xu, W. Li and Others, Patentminer: topic-driven patent analysis and mining, KDD '12 Proceedings of the 18th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2012, 1366–1374. doi: 10.1145/2339530.2339741. Google Scholar J. Tang, S. Wu and J. Sun, Confluence: Conformity influence in large social networks, in Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ACM, Chicago, Illinois, USA, 2013,347–355. doi: 10.1145/2487575.2487691. Google Scholar J. Tang, C. Zhang, K. Cai, L. Zhang and Z. Su, Sampling representative users from large social networks, 2015. Google Scholar J. Tang, Y. Zhang, J. Sun, J. Rao, W. Yu, Y. Chen and A. C. M. 
Fong, Quantitative study of individual emotional states in social networks, Affective Computing, IEEE Transactions on, 3 (2012), 132-144. Google Scholar X. Tang and C. C. Yang, Ranking user influence in healthcare social media, ACM Trans. Intell. Syst. Technol., 3 (2012), Article No. 73. doi: 10.1145/2337542.2337558. Google Scholar Y. Tang, X. Xiao and Y. Shi, Influence maximization: Near-optimal time complexity meets practical efficiency, SIGMOD '14 Proceedings of the 2014 ACM SIGMOD International Conference on Management of Data, (2014), 75-86. doi: 10.1145/2588555.2593670. Google Scholar J. Teevan, D. Ramage and M. R. Morris, Twittersearch: A comparison of microblog search and web search, WSDM '11 Proceedings of the Fourth ACM International Conference on Web Search and Data Mining, (2011), 35-44. doi: 10.1145/1935826.1935842. Google Scholar G. Tong, W. Wu, S. Tang and D. -Z. Du, Adaptive influence maximization in dynamic social networks, IEEE/ACM Transactions on Networking, 25 (2017), 112–125, arXiv: 1506.06294. doi: 10.1109/TNET.2016.2563397. Google Scholar W. Tong, R. Goebel and G. Lin, Smoothed heights of tries and patricia tries, Theoretical Computer Science, 609 (2016), 620-626. doi: 10.1016/j.tcs.2015.02.009. Google Scholar H. Trottier and P. Philippe, Deterministic modeling of infectious diseases: theory and methods, The Internet Journal of Infectious Diseases, 1 (2001), 3. Google Scholar J. Tsai, T. H. Nguyen and M. Tambe, Security games for controlling contagion, 2012. Google Scholar W. Verbeke, D. Martens and B. Baesens, Social network analysis for customer churn prediction, Applied Soft Computing, 14 (2014), 431-446. doi: 10.1016/j.asoc.2013.09.017. Google Scholar J. Videras, A. L. Owen, E. Conover and S. Wu, The influence of social relationships on pro-environment behaviors, Journal of Environmental Economics and Management, 63 (2012), 35-50. doi: 10.1016/j.jeem.2011.07.006. Google Scholar B. Viswanath, A. Mislove, M. Cha and K. P. Gummadi, On the evolution of user interaction in facebook, WOSN '09 Proceedings of the 2nd ACM Workshop on Online Social Networks, (2009), 37-472. doi: 10.1145/1592665.1592675. Google Scholar R. W O Lfer and H. Scheithauer, Social influence and bullying behavior: Intervention-based network dynamics of the fairplayer. manual bullying prevention program, Aggressive behavior. Google Scholar C. Wang, W. Chen and Y. Wang, Scalable influence maximization for independent cascade model in large-scale social networks, Data Mining and Knowledge Discovery, 25 (2012), 545-576. doi: 10.1007/s10618-012-0262-1. Google Scholar F. Wang, E. Camacho and K. Xu, Positive influence dominating set in online social networks, in Proceedings of the 3rd International Conference on Combinatorial Optimization and Applications, Springer-Verlag, Huangshan, China, 5573 (2009), 313–321. doi: 10.1007/978-3-642-02026-1_29. Google Scholar G. Wang, Q. Hu and P. S. Yu, Influence and similarity on heterogeneous networks, in Proceedings of the 21st ACM International Conference on Information and Knowledge Management, ACM, Maui, Hawaii, USA, 2012, 1462–1466. doi: 10.1145/2396761.2398453. Google Scholar Y. Wang, G. Cong, G. Song and K. Xie, Community-based greedy algorithm for mining top-k influential nodes in mobile social networks, in Proceedings of the 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ACM, Washington, DC, USA, 2010, 1039–1048. doi: 10.1145/1835804.1835935. Google Scholar D. J. Watts and S. H. 
Strogatz, Collective dynamics of "small-world" networks, The Structure and Dynamics of Networks, (2011), 301-303. doi: 10.1515/9781400841356.301. Google Scholar J. Weng, E. P. Lim, J. Jiang and Q. He, Twitterrank: finding topic-sensitive influential twitterers, WSDM '10 Proceedings of the Third ACM International Conference on Web Search and Data Mining, (2010), 261-270. doi: 10.1145/1718487.1718520. Google Scholar C. Wilson, A. Sala, K. P. N. Puttaswamy and B. Y. Zhao, Beyond social graphs: User interactions in online social networks and their implications, ACM Trans. Web, 6 (2012), 1-31. doi: 10.1145/2382616.2382620. Google Scholar M. Workman, New media and the changing face of information technology use: The importance of task pursuit, social influence, and experience, Computers in Human Behavior, 31 (2014), 111-117. doi: 10.1016/j.chb.2013.10.008. Google Scholar S. Wu, J. Sun and J. Tang, Patent partner recommendation in enterprise social networks, in Proceedings of the Sixth ACM International Conference on Web search and Data Mining, ACM, Rome, Italy, 2013, 43–52. doi: 10.1145/2433396.2433404. Google Scholar X. Xu, N. Yuruk, Z. Feng and T. A. J. Schweiger, Scan: a structural clustering algorithm for networks, KDD '07 Proceedings of the 13th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, (2007), 824-833. doi: 10.1145/1281192.1281280. Google Scholar M. Yan, M. Han, C. Ai, Z. Cai and Y. Li, Data aggregation scheduling in probabilistic wireless networks with cognitive radio capability, in IEEE GLOBECOM 2016, 2016. doi: 10.1109/GLOCOM.2016.7841716. Google Scholar M. Yan, S. Ji, M. Han, Y. Li and Z. Cai, Data aggregation scheduling in wireless networks with cognitive radio capability, in Sensing, Communication, and Networking (SECON), 2014 Eleventh Annual IEEE International Conference on, IEEE, 2014,513–521. Google Scholar D. -N. Yang, H. -J. Hung, W. -C. Lee and W. Chen, Maximizing acceptance probability for active friending in on-line social networks, arXiv preprint, arXiv: 1302.7025. Google Scholar Y. Yang, J. Jia, S. Zhang, B. Wu, Q. Chen, J. Li, C. Xing and J. Tang, How do your friends on social media disclose your emotions?, 2014. Google Scholar Y. Yang, J. Tang, C. Leung, Y. Sun, Q. Chen, J. Li and Q. Yang, Rain: Social role-aware information diffusion, 2015. Google Scholar Z. Yang, J. Tang, B. Xu and C. Xing, Active learning for networked data based on non-progressive diffusion model, WSDM '14 Proceedings of the 7th ACM International Conference on Web Search and Data Mining, (2014), 363-372. doi: 10.1145/2556195.2556223. Google Scholar H. Yoganarasimhan, Impact of social network structure on content propagation: A study using youtube data, Quantitative Marketing and Economics, 10 (2012), 111-150. doi: 10.1007/s11129-011-9105-4. Google Scholar H. Yu, S. -K. Kim and J. Kim, Scalable and parallelizable processing of influence maximization for large-scale social networks?, in Proceedings of the 2013 IEEE International Conference on Data Engineering (ICDE 2013), IEEE Computer Society, 2013,266–277. Google Scholar X. Yu, X. Ren, Y. Sun, B. Sturt, U. Khandelwal, Q. Gu, B. Norick and J. Han, Recommendation in heterogeneous information networks with implicit user feedback, RecSys '13 Proceedings of the 7th ACM Conference on Recommender Systems, (2013), 347-350. doi: 10.1145/2507157.2507230. Google Scholar Y. Yu, T. Y. Berger-Wolf, J. 
Saia and Others, Finding spread blockers in dynamic networks, in Advances in Social Network Mining and Analysis, Springer, 2010, 55–76. Google Scholar H. Zhang, A. D. Procaccia and Y. Vorobeychik, Dynamic influence maximization under increasing returns to scale, 2015. Google Scholar H. Zhang, T. N. Dinh and M. T. Thai, Maximizing the spread of positive influence in online social networks, 2013 IEEE 33rd International Conference on Distributed Computing Systems, 2013. doi: 10.1109/ICDCS.2013.37. Google Scholar H. Zhang, S. Mishra and M. T. Thai, Recent advances in information diffusion and influence maximization of complex social networks, Opportunistic Mobile Social Networks, 37. Google Scholar J. Zhang, B. Liu, J. Tang, T. Chen and J. Li, Social influence locality for modeling retweeting behaviors, in Proceedings of the Twenty-Third International Joint Conference on Artificial Intelligence, AAAI Press, Beijing, China, 2013, 2761–2767. Google Scholar J. Zhang, J. Tang, C. Ma, H. Tong, Y. Jing and J. Li, Panther: Fast top-k similarity search in large networks, KDD '15 Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2015, 1445–1454, arXiv: 1504.02577. doi: 10.1145/2783258.2783267. Google Scholar J. Zhang, J. Tang, H. Zhuang, C. W. -K. Leung and J. Li, Role-aware conformity influence modeling and analysis in social networks, 2014. Google Scholar M. Zhang, J. Tang, X. Zhang and X. Xue, Addressing cold start in recommender systems: A semi-supervised co-training algorithm, SIGIR '14 Proceedings of the 37th International ACM SIGIR Conference on Research & Development in Information Retrieval, (2014), 73-82. doi: 10.1145/2600428.2609599. Google Scholar P. Zhang, W. Chen, X. Sun, Y. Wang and J. Zhang, Minimizing seed set selection with probabilistic coverage guarantee in a social network, KDD '14 Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, (2014), 1306-1315. doi: 10.1145/2623330.2623684. Google Scholar J. Zhao, J. Wu, X. Feng, H. Xiong and K. Xu, Information propagation in online social networks: A tie-strength perspective, Knowledge and Information Systems, 32 (2012), 589-608. doi: 10.1007/s10115-011-0445-x. Google Scholar C. Zhou, P. Zhang, W. Zang and L. Guo, Maximizing the cumulative influence through a social network when repeat activation exists, Procedia Computer Science, 29 (2014), 422-431. doi: 10.1016/j.procs.2014.05.038. Google Scholar Y. Zhou and L. Liu, Social influence based clustering of heterogeneous information networks, in Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ACM, Chicago, Illinois, USA, 2013, 338–346. doi: 10.1145/2487575.2487640. Google Scholar H. Zhu, B. Huberman and Y. Luon, To switch or not to switch: Understanding social influence in online choices, in CHI '12, ACM, New York, NY, USA, 2012, 2257–2266. doi: 10.1145/2207676.2208383. Google Scholar H. Zhuang, Y. Sun, J. Tang, J. Zhang and X. Sun, Influence maximization in dynamic social networks, 2013 IEEE 13th International Conference on Data Mining, (2013), 1313-1318. doi: 10.1109/ICDM.2013.145. Google Scholar
Figure 1. Publications Related to Influence Analysis in Recent Years
Figure 2. A Social Network
Figure 3. Common Neighbors in a Community
Figure 4. Models of Social Influence by Different Networks
Figure 5. Diffusion Models of Social Influence
Figure 6. Probing Community for Dynamic Network
Probing Community for Dynamic Network. Figure 7. Influence Maximization Models in Social Network. Figure 8. Influence Diffusion Processing 1. Figure 10. Heterogeneous Models of Social Influence. Figure 11. Models of Social Influence based on Biological Transmission. Figure 12. Comprehensive Models of Social Influence. Figure 13. Influence Analysis based Applications.
Table 1. Extensions or improvements of $IC/LT$ models (Reference | Extension | $IC$ | $LT$ | Remarks):
Goyal [71] | Learnt probabilities from the action log; simulation on both $IC$ and $LT$ models | $\surd$ | $\surd$ | -
Chen et al. [42] | Addressed the scalability issue by proposing an efficient heuristic algorithm that restricts computations to the local influence regions of nodes | $\surd$ | $\times$ | Showed that computing influence spread in the independent cascade model is #P-hard
Chen et al. [41] | Extended the classical $IC$ model to study time-delayed influence diffusion | $\surd$ | $\times$ | Their technical report version provides the NP-complete hardness of $LT$ with their time-delay feature
Masahiro et al. [127] | Improved the basic $IC$ and $LT$ by estimating marginal influence degrees | $\surd$ | $\surd$ | -
Chen et al. [43] | Degree discount heuristics achieve almost the same influence spread as the greedy algorithm, and run in milliseconds where the traditional method runs in hours | $\surd$ | $\surd$ | -
Wang et al. [236] | Heuristic algorithm for the $IC$ model | $\surd$ | $\times$ | -
Chen et al. [37] | Extended the classical $IC$ model to incorporate negative opinions | $\surd$ | $\times$ | -
Nam et al. [188] | Focused on how to limit viral propagation of misinformation in OSNs | $\surd$ | $\surd$ | -
Wang et al. [239] | Extended $IC$ to mobile social networks, using a dynamic programming algorithm to select communities and then find influential nodes | $\surd$ | $\surd$ | -
Kyomin et al. [121] | Algorithm IRIE, where IR stands for influence ranking and IE for influence maximization, proposed to improve the previously developed classical algorithms | $\surd$ | $\times$ | The algorithm was used in both the classical $IC$ model and the extension $IC$-$N$ [37]
Thang [58] | Extended the $LT$ model by constraining the influence distance to a constant $d$ | $\times$ | $\surd$ | -
Chen et al. [44] | A scalable heuristic algorithm for $LT$, developed by constructing local directed acyclic graphs (DAGs) | $\times$ | $\surd$ | Showed that computing influence spread in the linear threshold model is #P-hard
Borodin et al. [22] | Introduced $K$-$LT$ as an extension of $LT$ involving competition of influence | $\times$ | $\surd$ | -
He et al. [104] | Extended the $LT$ model to the influence blocking maximization problem | $\times$ | $\surd$ | -
Goyal et al. [77] | Improved $LT$ by cutting down the number of calls made in the first iteration, which is the key to the estimation procedure | $\times$ | $\surd$ | -
Goyal et al. [74] | Under both the $IC$ and $LT$ models, pursued alternative goals motivated by resource and time constraints | $\surd$ | $\surd$ | -
Barbieri et al. [16] | Extended both $IC$ and $LT$ to topic-aware models | $\surd$ | $\surd$ | -
Wang et al. [238] | Extended $IC$ to incorporate similarity in social networks | $\surd$ | $\times$ | -
Rodriguez et al. [198] | General case of the $IC$ model with a time constraint | $\surd$ | $\times$ | -
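Most of the $IC$ rows in Table 1 build on the same basic Independent Cascade process: each newly activated node gets a single chance to activate each of its inactive out-neighbours, independently, with some edge probability. The following sketch is purely illustrative (the adjacency-dictionary representation, the uniform activation probability and the tiny example graph are assumptions made here, not taken from any of the cited papers):

```python
import random

def independent_cascade(graph, seeds, p=0.1, rng=random.Random(0)):
    """One Monte Carlo run of the basic IC model.

    graph : dict mapping each node to a list of its out-neighbours
    seeds : initially active nodes
    p     : activation probability used for every edge (uniform, for simplicity)
    """
    active = set(seeds)
    frontier = list(seeds)                # nodes activated in the previous step
    while frontier:
        next_frontier = []
        for u in frontier:
            for v in graph.get(u, []):
                # each newly active node gets exactly one chance to activate v
                if v not in active and rng.random() < p:
                    active.add(v)
                    next_frontier.append(v)
        frontier = next_frontier
    return len(active)                    # influence spread of this single run

# Tiny illustrative graph; the expected spread of a seed set is estimated by averaging,
# which is exactly why the #P-hardness results in Table 1 matter in practice.
g = {0: [1, 2], 1: [2, 3], 2: [3], 3: []}
estimate = sum(independent_cascade(g, seeds={0}, p=0.5) for _ in range(1000)) / 1000
print(estimate)
```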
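The $LT$ column in Table 1 above refers to the Linear Threshold model, in which an inactive node becomes active once the total influence weight of its already-active in-neighbours reaches a randomly drawn threshold. Again a purely illustrative sketch (node ids, weights and the seed set are made up for the example):

```python
import random

def linear_threshold(in_neighbours, weights, seeds, rng=random.Random(0)):
    """One run of the basic LT model.

    in_neighbours : dict node -> list of in-neighbours
    weights       : dict (u, v) -> influence weight of u on v (summing to at most 1 per v)
    seeds         : initially active nodes
    """
    thresholds = {v: rng.random() for v in in_neighbours}   # uniform thresholds
    active = set(seeds)
    changed = True
    while changed:
        changed = False
        for v in in_neighbours:
            if v in active:
                continue
            incoming = sum(weights.get((u, v), 0.0)
                           for u in in_neighbours[v] if u in active)
            if incoming >= thresholds[v]:
                active.add(v)
                changed = True
    return len(active)

# Node 2 is influenced by 0 and 1 (weight 0.4 each); node 3 by 2 (weight 0.8).
g_in = {0: [], 1: [], 2: [0, 1], 3: [2]}
w = {(0, 2): 0.4, (1, 2): 0.4, (2, 3): 0.8}
print(linear_threshold(g_in, w, seeds={0, 1}))
```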
CommonCrawl
Source for roots of matrix polynomials? A matrix polynomial is a polynomial whose variables are square $n \times n$ matrices, let's say with entries in $\mathbb{C}$, and with coefficients in $\mathbb{C}$. I am seeking a source of results on solving such equations. For example, $X^2 =0$ has infinitely many solutions, because, e.g., $$ X = \left( \begin{array}{ccc} 0 & 0 & x \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{array} \right) $$ is a solution for all $x \in \mathbb{C}$. (Second example removed, as it did not fit the definition.) There is such a rich set of results on roots of polynomial equations over $\mathbb{C}$ and over $\mathbb{R}$ that I am hoping there are analogs when the variables are matrices rather than single elements of fields. But my (superficial) explorations did not uncover a comprehensive source on this topic. To pose a specific question: Q. Are there theorems that yield the number of solutions/roots for such matrix polynomial equations? reference-request matrices polynomials matrix-analysis Joseph O'Rourke $\begingroup$ In your second example one of the coefficients is a matrix! $\endgroup$ – Qiaochu Yuan Jun 15 '14 at 2:55 $\begingroup$ Where did you encounter that definition (apart from Wikipedia)? As far as I know, in linear algebra research "matrix polynomial" is used as a synonym for "polynomial matrix", while what you speak about would simply be called "a (scalar) polynomial evaluated in a matrix argument". Sources: Gohberg, Lancaster, Rodman, Matrix Polynomials; Higham, Functions of matrices. $\endgroup$ – Federico Poloni Jun 15 '14 at 7:22 $\begingroup$ Apologies to all for the misleading example! @FedericoPoloni: I did not know the term for these polynomials, searched and found that in Wikipedia. From your references, it looks like Wikipedia needs updating to acknowledge the terminological variations. $\endgroup$ – Joseph O'Rourke Jun 15 '14 at 13:41 As Geoff Robinson says, a Jordan form takes you quite far. Evaluating a scalar polynomial (or an analytic function) $f(x)$ at a Jordan block $J_{\lambda,t}$ of size $t$ and eigenvalue $\lambda$ gives the triangular Toeplitz matrix $$ f(J_{\lambda,t})= \begin{bmatrix} f(\lambda) & f'(\lambda) & \frac{f''(\lambda)}{2!} & \dots & \frac{f^{(t-1)}(\lambda)}{(t-1)!}\\ 0 & f(\lambda) & f'(\lambda) & \ddots & \vdots\\ 0 & 0 & \ddots & \ddots & \vdots \\ \vdots & \vdots & \ddots & \ddots & \vdots\\ 0 & 0 & \dots & 0 & f(\lambda) \end{bmatrix}. $$ So evaluating $f(A)$ where $A$ has Jordan form $A=M (\bigoplus J_{\lambda_i,t_i}) M^{-1}$ gives $M(\bigoplus f(J_{\lambda_i,t_i}))M^{-1}$. In other words, for $f(A)$ to be zero, $f$ needs to satisfy $f(\lambda)=0,f'(\lambda)=0,f''(\lambda)=0,\dots,f^{(t-1)}(\lambda)=0$ for each eigenvalue $\lambda$ of $A$, where $t$ is the size of the largest Jordan block associated with $\lambda$. Conversely, given $f$, the matrices at which it vanishes are all those who have eigenvalues equal to the scalar roots of the polynomial, with algebraic multiplicities smaller or equal than their multiplicities as scalar roots of $f$. This is just a complicated way to state that $f$ has to be a multiple of the minimal polynomial of $A$ (which sounds quite obvious). This solves completely your first example; as for the second, it doesn't quite fit your definition, as has been pointed out in the comments. A reference for the computation of $f(J_{\lambda,t})$, and other equivalent definitions of functions evaluated at a matrix argument, is Chapter 1 of Higham, Functions of matrices.
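A quick computational check of the Jordan-block formula above is straightforward; the snippet below is only a sketch (the block size $t=3$ and the test polynomial are arbitrary choices) verifying that the $k$-th superdiagonal of $f(J_{\lambda,t})$ carries $f^{(k)}(\lambda)/k!$:

```python
import sympy as sp

lam, x = sp.symbols('lambda x')
t = 3                                               # size of the Jordan block
J = sp.Matrix(t, t, lambda i, j: lam if i == j else (1 if j == i + 1 else 0))

f = x**4 - 3*x**2 + 2                               # arbitrary test polynomial
fJ = J**4 - 3*J**2 + 2*sp.eye(t)                    # f evaluated at the matrix J

for k in range(t):
    expected = sp.diff(f, x, k).subs(x, lam) / sp.factorial(k)
    assert sp.simplify(fJ[0, k] - expected) == 0    # entry on the k-th superdiagonal

print(fJ.applyfunc(sp.expand))
```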
122 silver badges33 bronze badges Federico PoloniFederico Poloni (Covers question as originally asked) Well, in the cases you cite, Jordan normal form already takes you quite far. If you want to solve $p(X) = M $ for $n \times n$ complex matrices $X$ and $M$ and $p(t) \in \mathbb{C}[t],$ then if there is a solution at all, the matrix $X$ must act on each generalized eigenspace of $M,$ as it certainly should commute with $M.$ Hence over an algebraically closed field, you may as well reduce to the case that $M$ has a single eigenvalue. Over a field which is not necessarily algebraically closed, you can reduce via rational canonical form to the case that $M$ has a characteristic polynomial which is a power of a singe irreducible polynomial. But it isn't clear to me what generality you want to work in. Later edit: (Complex case): Solving $p(X) = M$ when $M$ has a single eigenvalue reduces easily to the case when $M$ is nilpotent ( with $p(t)$ replaced by $q(t) - \lambda$ for some scalar $\lambda$). This in turn reduces to piecing together in a consistent fashion solutions to equations $Y^{d} = N^{\prime},$ where both $Y$ and $N^{\prime}$ are nilpotent, and $Y$ has a single Jordan block ( if there is a solution,$M$ must respect any decomposition of the space into indecomposable $X$-invariant summands,and we consider each such indecomposable summand separately). Then it is a question of determining the Jordan normal form of $Y^{d}$ when $Y$ is a nilpotent matrix with a single Jordan block. Note that if $M$ is diagonalizable, there is always a complex solution, and that, in that case, there are finitely many solutions if $M$ has no repeated eigenvalue. Geoff RobinsonGeoff Robinson $\begingroup$ Your remarks are already quite useful to me---Thanks! As is not uncommon, my question reflects some vague confusion. Otherwise I wouldn't be asking! $\endgroup$ – Joseph O'Rourke Jun 15 '14 at 0:58 Well, the Riccati matrix equation and its variants transform (assuming the leading "coefficient" is invertible) to the quadratic $Z^2 + AZ + B = 0$ (where $Z$ is the unknown matrix and $A,B$ are square). There are generically $C(2n,n)$ solutions, although there could be less, or a continuum (but not greater than $C(2n,n)$ and less than $c$); if $A$ and $B$ commute, the generic number is $2^n$. One reference is, D Handelman [me], Fixed points of two-sided fractional matrix transformations, Fixed point theory and its applications, 2007, ID41930, doi:10.1155/2007/41930, which is freely downloadable. It's mainly concerned with fixed points of the densely-defined transformation (on $n \times n$ matrices) $X \mapsto (I - CXD)^{-1}$ (where $C$ and $D$ are given), which can be transformed into the quadratic above, and variations on it. It turns out there is a natural graph structure on the solutions, which is generically the Johnson graph. Edit: Oops, on rereading your question, I see that you required the coefficients to be scalars, which is much more tractible. Oh well, never mind. (If you remember Gilda Radner on SNL ....) David HandelmanDavid Handelman Although your question is formulated over $\mathbb{C}$, I still think it is interesting to point out how number theory comes up when you study this problem over rings, not fields. I will discuss the special situation of the equation $f(M)=0$ with $f$ a separable polynomial (irreducible most of the time). 
First, let me rephrase the answers given by Geoff Robinson and Federico Poloni: if $k$ is a field and $f$ is a separable polynomial of degree $n$ over $k$, then the equation $f(M)=0$ has - up to conjugacy with matrices from $GL_n(k)$ - a single solution in $M_n(k)$. One particular solution is the companion matrix of $f$, and all the others are obtained by conjugation. Now if we are trying to solve $f(M)$ in $M_n(\mathbb{Z})$, the type of answer changes. There is a classical theorem of Latimer and MacDuffee (C.G. Latimer and C.C. MacDuffee. A correspondence between classes of ideals and classes of matrices. Ann. of Math. (2) 34 (1933), no. 2, 313-316) that sets up a bijective correspondence between conjugacy classes of matrices with characteristic polynomial $f$ and ideal classes in the extension $\mathbb{Z}[\alpha]$, where $\alpha$ is a root of $f$. You can get from ideal classes to matrices by picking a basis of a representing ideal and writing down the representing matrices of multiplication with $\alpha$. A more modern writeup of this correspondence can be found in these notes of Keith Conrad. Using this correspondence, you get finiteness of the number of conjugacy classes from the finiteness of the ideal class group, a basic theorem in algebraic number theory. As a simple application, there are $3$ conjugacy classes of elements of order $23$ in $GL_{22}(\mathbb{Z})$, which may not be obvious without the algebraic number theory background. This correspondence can be generalized beyond the case of $\mathbb{Z}$, but requires more number theory then. Matthias WendtMatthias Wendt A comment on the simplest case of the revised question: Given $\alpha,\beta \in \mathbb{C}$ consider $f(x)=x^2+\alpha x+\beta.$ The solutions in $2 \times 2$ complex matrices $X$ of $f(X)=X^2+\alpha X+\beta I$ are as follows: either $1$ or $4$ solutions in diagonal matrices (according as $f$ has $1$ or $2$ distinct complex roots) A two parameter family of non-diagonal solutions. A diagonal solution must be $X = \left( \begin{array}{cc} a &0 \\ 0 & d \end{array} \right)$ with $f(a)=f(d)=0$ A solution $X = \left( \begin{array}{cc} a &b\\ c & d \end{array} \right)$ with $b,c$ not both $0$ must have $a+d=-\alpha$ and $bc=-(a^2+\alpha a +\beta)$ and $bc=-(d^2+\alpha d+\beta).$ However the first two equation makes the second two equivalent since $a^2+\alpha a=-ad=d^2+\alpha d.$ So the diagonal entries (for this non-diagonal case) come from a line in the $a,d$ plane and this choice forces the off diagonal entries to come from a hyperbola in the $b,c$ plane (which may be the degenerate case $bc=0$ of two perpendicular lines.) Aaron MeyerowitzAaron Meyerowitz F.R. Gantmaher, The theory of matrices is a good reference of this subject, in my opinion. EvgeniyEvgeniy The following sentence by Federico "Conversely, given f, the matrices at which it vanishes are all those who have eigenvalues equal to the scalar roots of the polynomial, with algebraic multiplicities smaller or equal than their multiplicities as scalar roots of f" seems to me unclear. We want to solve the equation (E): $p(X)=0$ - where $p\in K[x]$ - in the unknown $X\in M_n(K)$. i) $K$ is an algebraic closed field. The roots of $p$ are $(\alpha_i)_{i\leq s}$ with multiplicity $(r_i)_{i\leq s}$. Let $J_r$ be the nilpotent Jordan block of dimension $r$. 
Then $X$ is a solution of (E) IFF $X$ is similar to $diag(U_1,\cdots,U_k)$ for any choice of $k$ and of the dimensions $n_j$ of the $U_j$ satisfying $n_1+\cdots+n_k=n$, and for any choice of $U_j$ among the matrices of the following form: $\alpha_i I_{n_j}+J_{n_j}$ with $i\leq s$ and $n_j\leq r_i$. ii) $K$ is a field that is not algebraically closed. We factor $p$ into irreducibles, $p=p_1^{r_1}\cdots p_s^{r_s}$. The result is similar to that of i). It suffices to choose $U_j$ among the companion matrices of ${p_i}^{q}$ where $q=n_j/\deg(p_i)$ and $q\leq r_i$. iii) $K$ is a Euclidean ring (for example $\mathbb{Z}$). As Matthias wrote, it is much more difficult. We must determine the number of ideal classes; using the Magma software we can do that, but moreover we must find one representative in each class, which is not obvious. For example, solve $A^3=I_n$ where $A\in M_n(\mathbb{Z})$. Note that, obviously, Joseph's question is not at research level. Yet everyone rushes to give an answer (me included!). Hilarious detail: "il maestro" Joseph has $43800$ points. Compare with this post sent on MO and on MSE by an unknown user - here $K=\mathbb{Z}[i]$ - https://math.stackexchange.com/questions/634655/how-to-find-all-the-solutions-to-ia-cdotsan-0 That question (in my opinion) is difficult and interesting. Yet the post was removed from MO because it was not at research level. Hilarious detail: this poor unknown user has only $91$ points. La Fontaine, a great French poet (not the same as the poet of Michael Connelly), wrote (in French) "Selon que vous serez puissant ou misérable, Les jugements de cour vous rendront blanc ou noir", that is (in my bad, non-idiomatic English) "Depending on whether you will be powerful or miserable, Court judgments make you white or black". loup blanc $\begingroup$ This is not the place to discuss moderation; please open a thread on meta if you have a concern. Apart from that, thanks for formalizing my statement above. The question asks about $\mathbb{C}$, so I did not cover your cases ii) and iii). $\endgroup$ – Federico Poloni Jun 20 '14 at 6:30 $\begingroup$ OK Federico, I understand. $\endgroup$ – loup blanc Jun 21 '14 at 10:50
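Aaron Meyerowitz's description of the non-diagonal $2\times 2$ solutions of $X^2+\alpha X+\beta I=0$ can also be checked symbolically. The sketch below simply substitutes the claimed two-parameter family back into the equation (the symbol names are mine, and $b\neq 0$ is assumed):

```python
import sympy as sp

a, b, alpha, beta = sp.symbols('a b alpha beta')
d = -alpha - a                                   # trace condition from the answer above
c = -(a**2 + alpha*a + beta) / b                 # off-diagonal condition (requires b != 0)
X = sp.Matrix([[a, b], [c, d]])

residual = sp.simplify(X**2 + alpha*X + beta*sp.eye(2))
print(residual)   # the 2x2 zero matrix: every member of the family is indeed a root
```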
CommonCrawl
Analysis of a reaction diffusion model for a reservoir supported spread of infectious disease An interface-free multi-scale multi-order model for traffic flow November 2019, 24(11): 6209-6238. doi: 10.3934/dcdsb.2019136 Portfolio optimization and model predictive control: A kinetic approach Torsten Trimborn 1,, , Lorenzo Pareschi 2, and Martin Frank 3, Institut für Geometrie und Praktische Mathematik, RWTH Aachen, Templergraben 55, 52056 Aachen, Germany Department of Mathematics and Computer Science, University of Ferrara, Via Machiavelli 30, I-44121 Ferrara, Italy Karlsruhe Institute of Technology, Steinbuch Center for Computing, Hermann-von-Helmholtz-Platz 1, 76344 Eggenstein-Leopoldshafen, Germany Corresponding author: [email protected] Received June 2018 Revised January 2019 Published July 2019 Figure(10) In this paper, we introduce a large system of interacting financial agents in which all agents are faced with the decision of how to allocate their capital between a risky stock or a risk-less bond. The investment decision of investors, derived through an optimization, drives the stock price. The model has been inspired by the econophysical Levy-Levy-Solomon model [30]. The goal of this work is to gain insights into the stock price and wealth distribution. We especially want to discover the causes for the appearance of power-laws in financial data. We follow a kinetic approach similar to [33] and derive the mean field limit of the microscopic agent dynamics. The novelty in our approach is that the financial agents apply model predictive control (MPC) to approximate and solve the optimization of their utility function. Interestingly, the MPC approach gives a mathematical connection between the two opposing economic concepts of modeling financial agents to be rational or boundedly rational. Furthermore, this is to our knowledge the first kinetic portfolio model which considers a wealth and stock price distribution simultaneously. Due to the kinetic approach, we can study the wealth and price distribution on a mesoscopic level. The wealth distribution is characterized by a log-normal law. For the stock price distribution, we can either observe a log-normal behavior in the case of long-term investors or a power-law in the case of high-frequency trader. Furthermore, the stock return data exhibit a fat-tail, which is a well known characteristic of real financial data. Keywords: Portfolio optimization, kinetic modeling, model predictive control, stylized facts, stock market, bounded rationality. Mathematics Subject Classification: 91Bxx, 91Cxx, 35Qxx, 35Kxx, 37Fxx, 49Nxx, 70Fxx, 70Kxx. Citation: Torsten Trimborn, Lorenzo Pareschi, Martin Frank. Portfolio optimization and model predictive control: A kinetic approach. Discrete & Continuous Dynamical Systems - B, 2019, 24 (11) : 6209-6238. doi: 10.3934/dcdsb.2019136 G. Albi, M. Herty and L. Pareschi, Kinetic description of optimal control problems and applications to opinion consensus, Communications in Mathematical Sciences, 13 (2015), 1407-1429. doi: 10.4310/CMS.2015.v13.n6.a3. Google Scholar G. Albi, L. Pareschi and M. Zanella, Boltzmann-type control of opinion consensus through leaders, Philosophical Transactions of the Royal Society of London A: Mathematical, Physical and Engineering Sciences, 372 (2014), 20140138, 18pp. doi: 10.1098/rsta.2014.0138. Google Scholar A. Beja and M. B. Goldman, On the dynamic behavior of prices in disequilibrium, The Journal of Finance, 35 (1980), 235-248. Google Scholar D. Bertsimas and D. 
Pachamanova, Robust multiperiod portfolio management in the presence of transaction costs, Computers & Operations Research, 35 (2008), 3-17. doi: 10.1016/j.cor.2006.02.011. Google Scholar M. Bisi, G. Spiga and G. Toscani, Kinetic models of conservative economies with wealth redistribution, Communications in Mathematical Sciences, 7 (2009), 901-916. doi: 10.4310/CMS.2009.v7.n4.a5. Google Scholar J.-P. Bouchaud and M. Mézard, Wealth condensation in a simple model of economy, Physica A: Statistical Mechanics and its Applications, 282 (2000), 536-545. doi: 10.1016/S0378-4371(00)00205-3. Google Scholar W. Braun and K. Hepp, The Vlasov dynamics and its fluctuations in the 1/N limit of interacting classical particles, Communications in Mathematical Physics, 56 (1977), 101-113. doi: 10.1007/BF01611497. Google Scholar W. A. Brock and C. H. Hommes, Heterogeneous beliefs and routes to chaos in a simple asset pricing model, Journal of Economic Dynamics and Control, 22 (1998), 1235-1274. doi: 10.1016/S0165-1889(98)00011-6. Google Scholar M. Burger, L. Caffarelli, P. A. Markowich and M.-T. Wolfram, On a Boltzmann-type price formation model, Proc. R. Soc. A, 469 (2013), 20130126, 20pp. doi: 10.1098/rspa.2013.0126. Google Scholar E. Camacho and C. Bordons, Model Predictive Control, Springer, USA, 2004. Google Scholar A. Chatterjee and B. K. Chakrabarti, Kinetic exchange models for income and wealth distributions, The European Physical Journal B-Condensed Matter and Complex Systems, 60 (2007), 135-149. doi: 10.1140/epjb/e2007-00343-8. Google Scholar L. Chayes, M. del Mar González, M. P. Gualdani and I. Kim, Global existence and uniqueness of solutions to a model of price formation, SIAM Journal on Mathematical Analysis, 41 (2009), 2107-2135. doi: 10.1137/090753346. Google Scholar J. Che, A kinetic model on portfolio in finance, Communications in Mathematical Sciences, 9 (2011), 1073-1096. doi: 10.4310/CMS.2011.v9.n4.a7. Google Scholar C. Chiarella, R. Dieci and X.-Z. He, Heterogeneous expectations and speculative behavior in a dynamic multi-asset framework, Journal of Economic Behavior & Organization, 62 (2007), 408-427. Google Scholar D. Colander, H. Föllmer, A. Haas, M. D. Goldberg, K. Juselius, A. Kirman, T. Lux and B. Sloth, The financial crisis and the systemic failure of academic economics, 2009. Google Scholar R. Cont, Empirical properties of asset returns: Stylized facts and statistical issues, 2001. Google Scholar S. Cordier, L. Pareschi and C. Piatecki, Mesoscopic modelling of financial markets, Journal of Statistical Physics, 134 (2009), 161-184. doi: 10.1007/s10955-008-9667-z. Google Scholar S. Cordier, L. Pareschi and G. Toscani, On a kinetic model for a simple market economy, Journal of Statistical Physics, 120 (2005), 253-277. doi: 10.1007/s10955-005-5456-0. Google Scholar R. Cross, M. Grinfeld, H. Lamba and T. Seaman, A threshold model of investor psychology, Physica A: Statistical Mechanics and its Applications, 354 (2005), 463-478. doi: 10.1016/j.physa.2005.02.029. Google Scholar M. Delitala and T. Lorenzi, A mathematical model for value estimation with public information and herding, Kinetic & Related Models, 7 (2014), 29-44. doi: 10.3934/krm.2014.7.29. Google Scholar R. L. Dobrushin, Vlasov equations, Functional Analysis and Its Applications, 13 (1979), 115-123. Google Scholar B. Düring, D. Matthes and G. Toscani, Kinetic equations modelling wealth redistribution: A comparison of approaches, Physical Review E, 78 (2008), 056103, 12pp. doi: 10.1103/PhysRevE.78.056103. Google Scholar E. 
Egenter, T. Lux and D. Stauffer, Finite-size effects in Monte Carlo simulations of two stock market models, Physica A: Statistical Mechanics and its Applications, 268 (1999), 250-256. doi: 10.1016/S0378-4371(99)00059-X. Google Scholar J. D. Farmer and D. Foley, The economy needs agent-based modelling, Nature, 460 (2009), 685-686. doi: 10.1038/460685a. Google Scholar T. Hellthaler, The influence of investor number on a microscopic market model, International Journal of Modern Physics C, 6 (1996), 845-852. doi: 10.1142/S0129183195000691. Google Scholar D. Kahneman and A. Tversky, Prospect theory: An analysis of decision under risk, Econometrica: Journal of the Econometric Society, 47 (1979), 263-291. doi: 10.2307/1914185. Google Scholar K. Kanazawa, T. Sueshige, H. Takayasu and M. Takayasu, Derivation of the boltzmann equation for financial brownian motion: direct observation of the collective motion of high-frequency traders, Physical Review Letters, 120 (2018), 138301. doi: 10.1103/PhysRevLett.120.138301. Google Scholar K. Kanazawa, T. Sueshige, H. Takayasu and M. Takayasu, Kinetic theory for finance brownian motion from microscopic dynamics, arXiv: 1802.05993, 2018. Google Scholar R. Kohl, The influence of the number of different stocks on the Levy–Levy–Solomon model, International Journal of Modern Physics C, 8 (1997), 1309-1316. doi: 10.1142/S0129183197001168. Google Scholar M. Levy, H. Levy and S. Solomon, A microscopic model of the stock market: Cycles, booms, and crashes, Economics Letters, 45 (1994), 103-111. Google Scholar T. Lux et al., Stochastic Behavioral Asset Pricing Models and the Stylized Facts, Technical report, Economics working paper/Christian-Albrechts-Universität Kiel, Department of Economics, 2008. Google Scholar T. Lux and M. Marchesi, Scaling and criticality in a stochastic multi-agent model of a financial market, Nature, 397 (1999), 498-500. doi: 10.1038/17290. Google Scholar D. Maldarella and L. Pareschi, Kinetic models for socio-economic dynamics of speculative markets, Physica A: Statistical Mechanics and its Applications, 391 (2012), 715-730. doi: 10.1016/j.physa.2011.08.013. Google Scholar H. Markowitz, Portfolio selection, The Journal of Finance, 7 (1952), 77-91. Google Scholar D. Matthes and G. Toscani, Analysis of a model for wealth redistribution, Kinetic and Related Models, 1 (2008), 1-22. doi: 10.3934/krm.2008.1.1. Google Scholar D. Q. Mayne and H. Michalska, Receding horizon control of nonlinear systems, IEEE Trans. Automat. Control, 35 (1990), 814-824. doi: 10.1109/9.57020. Google Scholar R. C. Merton, Lifetime portfolio selection under uncertainty: The continuous-time case, The review of Economics and Statistics, 51 (1969), 247-257. doi: 10.2307/1926560. Google Scholar J. E. Mitchell and S. Braun, Rebalancing an investment portfolio in the presence of convex transaction costs, including market impact costs, Optimization Methods and Software, 28 (2013), 523-542. doi: 10.1080/10556788.2012.717940. Google Scholar H. Neunzert, The Vlasov equation as a limit of Hamiltonian classical mechanical systems of interacting particles, Trans. Fluid Dynamics, 18 (1977), 663-678. Google Scholar A. Pagan, The econometrics of financial markets, Journal of Empirical Finance, 3 (1996), 15-102. doi: 10.1016/0927-5398(95)00020-8. Google Scholar L. Pareschi and G. Toscani, Self-similarity and power-like tails in nonconservative kinetic models, Journal of statistical physics, 124 (2006), 747-779. doi: 10.1007/s10955-006-9025-y. Google Scholar [42] L. Pareschi and G. 
Toscani, Interacting Multiagent Systems: Kinetic Equations and Monte Carlo Methods, Oxford University Press, 2013. Google Scholar H. A. Simon, A behavioral model of rational choice, The Quarterly Journal of Economics, 69 (1955), 99-118. doi: 10.2307/1884852. Google Scholar D. Sornette, Physics and financial economics (1776–2014): Puzzles, Ising and agent-based models, Reports on Progress in Physics, 77 (2014), 062001, 28pp. doi: 10.1088/0034-4885/77/6/062001. Google Scholar A.-S. Sznitman, Topics in propagation of chaos, In Ecole d'Eté de Probabilités de Saint-Flour XIX-1989, 165–251, Lecture Notes in Math., 1464, Springer, Berlin, 1991. doi: 10.1007/BFb0085169. Google Scholar T. Trimborn, M. Frank and S. Martin, Mean field limit of a behavioral financial market model, Physica A: Statistical Mechanics and its Applications, 505 (2018), 613-631. doi: 10.1016/j.physa.2018.03.079. Google Scholar C. Villani, On a new class of weak solutions to the spatially homogeneous Boltzmann and Landau equations, Archive for Rational Mechanics and Analysis, 143 (1998), 273-307. doi: 10.1007/s002050050106. Google Scholar L. Walras., Études D'économie Politique Appliquée: (Théorie de la Production de la Richesse Sociale), F. Rouge, 1898. Google Scholar E. Zschischang and T. Lux, Some new results on the Levy, Levy and Solomon microscopic stock market model, Physica A: Statistical Mechanics and its Applications, 291 (2001), 563-573. doi: 10.1016/S0378-4371(00)00609-9. Google Scholar Figure 1. Sketch of the modelling process Figure 2. Example of the value function $ U_{\gamma} $ with different reference points Figure 3. Stock price evolution in the long-term investor case with a constant fundamental price $ s^f $ (left figure) and a time varying fundamental price (right figure). In both figures one obtains that the average stock price is above the funcamental value Figure 4. Quantile-quantile plot of logarithmic stock return distribution (left-hand side) and logarithmic return of fundamental prices (right-hand side). The simulation has been performed in the case of long-term investors and a stochastic fundamental price. The risk tolerance has been set to $ \gamma = 0.9 $, the scale to $ \rho = \frac{5}{8} $ and the random seed is chosen to be $\texttt{rng(767)}$. All further parameters are chosen as reported in section A.4 of the Appendix Figure 5. Stock price distribution in the long-term investor case. The solid lines are analytical solution, whereas the circles are the numerical result Figure 6. Distribution of the wealth invested in stocks with a Gaussian fit (solid line). Left figure has a linear scale, whereas the right figure shows the distribution in log-log scale Figure 7. Distribution of the wealth invested in bonds in the special case $ K>0 $. The numerical results (circles) are plotted with the corresponding log-normal analytic self-similar solution (solid lines) Figure 8. Stock price distribution in the high-frequency case (red circles). The fit by the inverse-gamma distribution (solid line) clearly underestimates the tail. This reveals that the full model can create heavier tails than the inverse-gamma distribution Figure 9. Marginal wealth distributions in the high-frequency investor case. The left hand side illustrates the distribution of investments in stocks and the right-hand side the wealth invested in bonds at $ t = 1 $ Figure 10. 
Steady state stock price distribution in the high-frequency investor case (circles) together with the analytically computed steady state of inverse-gamma type (solid line) Yuan Tan, Qingyuan Cao, Lan Li, Tianshi Hu, Min Su. A chance-constrained stochastic model predictive control problem with disturbance feedback. Journal of Industrial & Management Optimization, 2021, 17 (1) : 67-79. doi: 10.3934/jimo.2019099 Hanyu Gu, Hue Chi Lam, Yakov Zinder. Planning rolling stock maintenance: Optimization of train arrival dates at a maintenance center. Journal of Industrial & Management Optimization, 2020 doi: 10.3934/jimo.2020177 Jiannan Zhang, Ping Chen, Zhuo Jin, Shuanming Li. Open-loop equilibrium strategy for mean-variance portfolio selection: A log-return model. Journal of Industrial & Management Optimization, 2021, 17 (2) : 765-777. doi: 10.3934/jimo.2019133 Sabine Hittmeir, Laura Kanzler, Angelika Manhart, Christian Schmeiser. Kinetic modelling of colonies of myxobacteria. Kinetic & Related Models, 2021, 14 (1) : 1-24. doi: 10.3934/krm.2020046 M. S. Lee, H. G. Harno, B. S. Goh, K. H. Lim. On the bang-bang control approach via a component-wise line search strategy for unconstrained optimization. Numerical Algebra, Control & Optimization, 2021, 11 (1) : 45-61. doi: 10.3934/naco.2020014 Mikhail I. Belishev, Sergey A. Simonov. A canonical model of the one-dimensional dynamical Dirac system with boundary control. Evolution Equations & Control Theory, 2021 doi: 10.3934/eect.2021003 Mahdi Karimi, Seyed Jafar Sadjadi. Optimization of a Multi-Item Inventory model for deteriorating items with capacity constraint using dynamic programming. Journal of Industrial & Management Optimization, 2020 doi: 10.3934/jimo.2021013 Youming Guo, Tingting Li. Optimal control strategies for an online game addiction model with low and high risk exposure. Discrete & Continuous Dynamical Systems - B, 2020 doi: 10.3934/dcdsb.2020347 Juan Pablo Pinasco, Mauro Rodriguez Cartabia, Nicolas Saintier. Evolutionary game theory in mixed strategies: From microscopic interactions to kinetic equations. Kinetic & Related Models, 2021, 14 (1) : 115-148. doi: 10.3934/krm.2020051 Xin Guo, Lexin Li, Qiang Wu. Modeling interactive components by coordinate kernel polynomial models. Mathematical Foundations of Computing, 2020, 3 (4) : 263-277. doi: 10.3934/mfc.2020010 Bernard Bonnard, Jérémy Rouot. Geometric optimal techniques to control the muscular force response to functional electrical stimulation using a non-isometric force-fatigue model. Journal of Geometric Mechanics, 2020 doi: 10.3934/jgm.2020032 Guojie Zheng, Dihong Xu, Taige Wang. A unique continuation property for a class of parabolic differential inequalities in a bounded domain. Communications on Pure & Applied Analysis, , () : -. doi: 10.3934/cpaa.2020280 Bing Liu, Ming Zhou. Robust portfolio selection for individuals: Minimizing the probability of lifetime ruin. Journal of Industrial & Management Optimization, 2021, 17 (2) : 937-952. doi: 10.3934/jimo.2020005 Junkee Jeon. Finite horizon portfolio selection problems with stochastic borrowing constraints. Journal of Industrial & Management Optimization, 2021, 17 (2) : 733-763. doi: 10.3934/jimo.2019132 Lin Jiang, Song Wang. Robust multi-period and multi-objective portfolio selection. Journal of Industrial & Management Optimization, 2021, 17 (2) : 695-709. doi: 10.3934/jimo.2019130 Chun Liu, Huan Sun. On energetic variational approaches in modeling the nematic liquid crystal flows. 
Discrete & Continuous Dynamical Systems - A, 2009, 23 (1&2) : 455-475. doi: 10.3934/dcds.2009.23.455 Jean-Paul Chehab. Damping, stabilization, and numerical filtering for the modeling and the simulation of time dependent PDEs. Discrete & Continuous Dynamical Systems - S, 2021 doi: 10.3934/dcdss.2021002 Torsten Trimborn Lorenzo Pareschi Martin Frank
CommonCrawl
How is a single qubit fundamentally different from a classical coin spinning in the air? I had asked this question earlier in the comment section of the post: What is a qubit? but none of the answers there seem to address it at a satisfactory level. The question basically is: How is a single qubit in a Bell state $\frac{1}{\sqrt{2}}(|0\rangle + |1\rangle)$ any different from a classical coin spinning in the air (on being tossed)? The one-word answer for the difference between a system of 2 qubits and a system of 2 classical coins is "entanglement". For instance, you cannot have a system of two coins in the state $\frac{1}{\sqrt 2}|00\rangle+\frac{1}{\sqrt 2}|11\rangle$. The reason is simple: when two "fair" coins are spinning in air, there is always some finite probability that the first coin lands heads-up while the second coin lands tails-up, and vice versa. In the combined Bell state $\frac{1}{\sqrt 2}|00\rangle+\frac{1}{\sqrt 2}|11\rangle$ that is not possible. If the first qubit turns out to be $|0\rangle$, the second qubit will necessarily be $|0\rangle$. Similarly, if the first qubit turns out to be $|1\rangle$, the second qubit will necessarily turn out to be $|1\rangle$. At this point someone might point out that if we use $2$ "biased" coins then it might be possible to recreate the combined Bell state. The answer is still no (it's possible to mathematically prove it...try it yourself!). That's because the Bell state cannot be decomposed into a tensor product of two individual qubit states i.e. the two qubits are entangled. While the reasoning for the 2-qubit case is understandable from there, I'm not sure what fundamental reason distinguishes a single qubit from a single "fair" coin spinning in the air. This answer by @Jay Gambetta somewhat gets at it (but is still not satisfactory): This is a good question and in my view gets at the heart of a qubit. Like the comment by @blue, it's not that it can be an equal superposition as this is the same as a classical probability distribution. It is that it can have negative signs. Take this example. Imagine you have a bit in the $0$ state and then you apply a coin flipping operation by some stochastic matrix $\begin{bmatrix}0.5 & 0.5 \\0.5 & 0.5 \end{bmatrix}$ this will make a classical mixture. If you apply this twice it will still be a classical mixture. Now lets got to the quantum case and start with a qubit in the $0$ state and apply a coin flipping operation by some unitary matrix $\begin{bmatrix}\sqrt{0.5} & \sqrt{0.5} \\\sqrt{0.5} & -\sqrt{0.5} \end{bmatrix}$. This makes an equal superposition and you get random outcomes like above. Now applying this twice you get back the state you started with. The negative sign cancels due to interference which cannot be explained by probability theory. Extending this to n qubits gives you a theory that has an exponential that we can't find efficient ways to simulate. This is not just my view. I have seen it shown in talks of Scott Aaronson and I think its best to say quantum is like "Probability theory with Minus Signs" (this is a quote I seen Scott make). I'm not exactly sure how they're getting the unitary matrix $\begin{bmatrix}\sqrt{0.5} & \sqrt{0.5} \\\sqrt{0.5} & -\sqrt{0.5} \end{bmatrix}$ and what the motivation behind that is. Also, they say: "The negative sign cancels due to interference which can not be explained by probability theory." The way they've used the word interference seems very vague to me.
It would be useful if someone can elaborate on the logic used in that answer and explain what they actually mean by interference and why exactly it cannot be explained by classical probability. Is it some extension of Bell's inequality for 1-qubit systems (doesn't seem so based on my conversations with the folks in the main chat though)? physical-qubit quantum-state Sanchayan Dutta Sanchayan DuttaSanchayan Dutta How is a single qubit in a Bell state $\frac{1}{\sqrt{2}}(|0\rangle+|1\rangle)$ any different from a classical coin spinning in the air (on being tossed)? For both of them, the probability of getting heads is 1/2 and getting tails is also 1/2 (we can assume that heads$\equiv|1\rangle$ and tails$\equiv|0\rangle$ and that we are "measuring" in the heads-tails basis). For any 1-qubit state $|\psi\rangle$, if all you do is measure it in the computational basis, you will always be able to explain it in terms of a probability distribution p(heads)$=|\langle 0|\psi\rangle|^2$ and p(tails)$=|\langle 1|\psi\rangle|^2$. The key differences are in using different bases and/or performing unitary evolutions. The classic example is the Mach-Zehnder interferometer. Think of it this way: any 1-bit probabilistic operation is described by a $2\times 2$ stochastic matrix (i.e. all columns sum to 1). Call it $P$. It is easy enough to show that there is no $P$ such that $P^2=X$, where $X$ is the Pauli matrix (in other words, a NOT gate). Thus, there is no probabilistic gate that can be considered the square-root of NOT. On the other hand, we can build such a device. A half-silvered mirror performs the square-root of not action. A half-silvered mirror has two inputs (labelled 0 and 1) and two outputs (also labelled 0 and 1). Each input is a photon coming in a different direction, and it is either reflected or transmitted. If you just look at one half-silvered mirror, then whatever input you give, the output is 50:50 reflected or transmitted. It seems just like the coin you're talking about. However, if you put two of them together, if you input 0, or always get the output 1, and vice versa. The only way to explain this is with probability amplitudes, and a transition matrix that looks like $$ U=\frac{1}{\sqrt{2}}\left(\begin{array}{cc} 1 & i \\ i & 1 \end{array}\right). $$ In quantum mechanics, the square-root of not gate exists. $\begingroup$ It's hard to pin down a why. I'm just using it to show very explicitly that there is a difference. And, more to the point, to show that classical is insufficient to describe what actually happens in the experiment. So, you need a broader formalism. The idea of probability amplitudes gives you that broader formalism. It's like if you restrict to real numbers only, the square root is not well defined, because you need complex numbers to be able to explain it. That's basically what we're doing here. $\endgroup$ – DaftWullie Jun 25 '18 at 8:00 $\begingroup$ I get the transition matrix by saying that for one use, you get 50:50 outputs, so all 4 matrix elements must have a mod-square equal to 1/2. What $2\times 2$ complex matrices $U$ are there, satisfying that constraint, such that $U\cdot U=X$? Any answer will do. $\endgroup$ – DaftWullie Jun 25 '18 at 8:02 $\begingroup$ What do you mean by "fundamental"? 
Mathematically, it's because we have to describe quantum mechanics using a richer mathematical structure than classical (as proven by this square root of not, not that this gate is particularly special: you can replace the X by any stochastic matrix with a negative eigenvalue). In terms of physics, well physics is just the working theory that describes experimental outcomes (such as square root of not). In terms of some underlying explanation of why the world is the way that it is, who knows? $\endgroup$ – DaftWullie Jun 25 '18 at 8:27 $\begingroup$ You might also be interested in the Kochen-Specker Theorem. It only applies to qutrits and higher, but may help to cover what you want. $\endgroup$ – DaftWullie Jun 25 '18 at 8:31 $\begingroup$ The main problem in this answer is that, although the transition matrix in the case you mentioned i.e. $$U=\frac{1}{\sqrt{2}}\left(\begin{array}{cc} 1 & i \\ i & 1 \end{array}\right)$$ doesn't occur in the classical case, it doesn't mean the effects of the transition cannot be replicated for a classical coin. After all, the transition matrix cannot be measured directly. All we can measure is the outcome probabilities! $\endgroup$ – Sanchayan Dutta Jun 26 '18 at 17:42 The analogy between qubits and coin flips is popular but can be misleading. (See, for example, this video: https://www.youtube.com/watch?v=lypnkNm0B4A) A coin spinning in the air and landing on the ground is not truly random, though we may describe it as such. The key point is how you measure it. At any point in time the coin has a definite orientation, though it may be unknown to us. Likewise, qubits have a definite state at any time, which we can describe by a point on the surface of a sphere (the so-called Bloch sphere). Mathematically, a coin's orientation and a qubit's state are equivalent. While in the air, the coin may undergo deterministic and reversible motion (e.g., spinning and falling). Likewise, prior to measurement a qubit may undergo deterministic and reversible transformations (e.g., unitary gate operations on a quantum computer). Measurement represents an irreversible process. For a coin, it is a series of inelastic collisions with the ground, bouncing and spinning until it comes to rest. If we are completely ignorant of the initial conditions of the coin, the two final orientations (heads or tails) will appear equally likely, but this is not always the case. If I drop it oriented "heads up" from a short height, it will land flat with "heads up" with near certainty. But suppose I was standing next to a large magnetic wall and did this. The coin would hit edge-on and would likely land with either heads or tails showing, with equal probability. One could imagine doing this experiment with various initial orientations of the coin and orientations of the magnetic wall (upright, flat, slanted, etc.). You can imagine that the probability of getting heads or tails will be different, depending on the relative orientations of the coin and wall. (In theory it's all completely deterministic, but in practice we never know the initial conditions that precisely.) Measurements of qubits are quite similar. I can prepare a qubit in the state $\frac{1}{\sqrt{2}}[|0\rangle + |1\rangle]$, measure it in the 0/1 basis $\{|0\rangle, |1\rangle\}$, and get either $|0\rangle$ or $|1\rangle$ with equal probability. If, however, I measure in the +/- basis $\{|+\rangle, |-\rangle\}$ (analogous to using a magnetic wall), I get $|+\rangle$ with near certainty. 
(I say "near certainty" because, well, nothing in the real world is perfect.) Here, $|\pm\rangle = \frac{1}{\sqrt{2}}[|0\rangle\pm|1\rangle$ are the +/- basis states. For polarized photons, for example, this could be done used polarization filters rotated $45^\circ$. The difference between preparing the state $\frac{1}{\sqrt{2}}[|0\rangle + |1\rangle]$ and the state $\frac{1}{\sqrt{2}}[|0\rangle - |1\rangle]$ is the difference between preparing a vertically oriented coin with either heads or tails facing away from the wall. (A good picture would really help here.) We can tell which of the two states is prepared based on the outcome of a suitably chosen measurement, which in this case would be a +/- basis (or magnetic wall) measurement. Jay Gambetta mentions a unitary matrix that is used to represent a Hadamard gate. It corresponds to rotating a coin by $90^\circ$, so a coin that's initially heads up becomes vertically oriented with, say, heads facing away from the wall. If the wall is magnetic and you release the coin, it will stick to it with heads up. If, instead, you started with a coin that's tails up and applied the same rotation, it would be vertical with tails facing away from the wall. If you release it (and the wall is still magnetic), you get tails. On the other hand, if the wall is not magnetic and you drop it, it lands heads or tails with equal probability. Using a "floor" measurement doesn't distinguish between the two vertical orientations, but using a "wall" measurement does. It's not so much whether things are predictable or not, it's the type of measurement you do that distinguishes one quantum state (or coin orientation) from another. This is the whole of it. The only remaining mystery is that the outcome of the coin measurement is considered to be, in theory, completely deterministic, while that of the qubit is considered to be, except in special cases, "intrinsically random." But that's another discussion... Brian R. La CourBrian R. La Cour $\begingroup$ Could you also add an explanation for Jay Gambetta's approach using transition matrix (which he/she apparently justifies using quantum interference)? $\endgroup$ – Sanchayan Dutta Jun 24 '18 at 15:32 $\begingroup$ And to summarize, your main point is that while for a coin if we know the initial conditions sufficiently precisely, then the outcome of a measurement is completely predictable. But for a qubit, simply knowing the initial state sufficiently precisely isn't sufficient to predict the outcome of a measurement (which is essentially what the Copenhagen interpretation says). Yes? $\endgroup$ – Sanchayan Dutta Jun 24 '18 at 15:41 $\begingroup$ @Blue, please see my recent edits for an answer to your question. $\endgroup$ – Brian R. La Cour Jun 24 '18 at 17:04 You have already mentioned the practical differences, such as qubit entanglement, and the negative signs (or more general "phases"). The fundamental reason for this is that allowed quantum states are solutions of the Schrödinger equation, which is a linear differential equation. The sum of solutions to a linear differential equation is always also a solution to that differential equation [1],[2],[3]. Since "solution" to differential equation is synonymous with "allowed quantum state" or "allowed wavefunction", any sum of allowed states is also allowed (ie. superpositions like Bell states are allowed). That is the fundamental reason why quantum mechanical bits (qubits) can exist in superpositions. 
In fact, not just any sum, but any linear combination of states is an allowed state because the differential equation is linear. This means we can even add constants (phases of -1 or +1 or $e^{i\theta}$) and still have allowed states. Bits that follow the rules of quantum physics, for example, the Schrödinger equation, can physically exist in superpositions and with phases, due to linearity (review vector spaces if this is not clear). Classical physics does not give any mechanism for a system to be in more than one state at the same time. $\begingroup$ @Blue: I saw you had a conversation with someone where that person mentioned the need for "boundary conditions". It is not really true, qubits can exist in superposition and with phases because of linearity of the equation describing them. I have given 3 links which prove this fact in many ways. $\endgroup$ – user1271772 Jun 24 '18 at 18:01 $\begingroup$ Boundary conditions are what ensure that the states are discrete. The Schrodinger equation by itself cannot posit that. Moreover, the superposition that you speak of is certainly not something intrinsic to quantum mechanics. For instance, a classical system can be in a harmonic motion which is a superposition of two individual harmonic motions (basis motions), satisfying a certain ODE. $\endgroup$ – Sanchayan Dutta Jun 24 '18 at 20:23 $\begingroup$ "Classical physics does not give any mechanism for a system to be in more than one state at the same time." <--- being in more than one state at the same instant $\neq$ being in a superposition of two basis states. $\endgroup$ – Sanchayan Dutta Jun 24 '18 at 20:25
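DaftWullie's square-root-of-NOT argument is easy to verify numerically; the snippet below (an illustrative check, not tied to any particular device) shows that the unitary "half flip" composes to a NOT up to a global phase, whereas composing the classical 50/50 stochastic map with itself just returns the same 50/50 map:

```python
import numpy as np

# Classical fair coin flip as a stochastic matrix: applying it twice is still 50/50.
P = np.array([[0.5, 0.5],
              [0.5, 0.5]])
print(P @ P)                        # [[0.5, 0.5], [0.5, 0.5]]  -- never a NOT gate

# Beam-splitter unitary from DaftWullie's answer: its square is NOT up to a phase.
U = (1 / np.sqrt(2)) * np.array([[1, 1j],
                                 [1j, 1]])
X = np.array([[0, 1],
              [1, 0]])
print(np.allclose(U @ U, 1j * X))   # True: U^2 = i*X

# Hadamard-style matrix from the quoted Jay Gambetta answer: applying it twice
# restores the initial state, the minus sign cancelling by interference.
H = np.sqrt(0.5) * np.array([[1, 1],
                             [1, -1]])
print(np.allclose(H @ H, np.eye(2)))   # True
```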
CommonCrawl
LOGO: Logics for Ontologies
Lead Research Organisation: University of Oxford
Department Name: Computer Science
Ontologies help both humans and computer applications to communicate by providing a vocabulary of terms together with formal and computer-processable descriptions of their meanings and the relationships between them. They play a major role in the next-generation World Wide Web (known as the Semantic Web), where they are used to describe the content of Web resources, with the aim of both improving search for human users and making it easier for computer programs to exploit the vast range of information that is available on the Web. Ontologies are also widely used to define specialised vocabularies for use in medicine, biology and other scientific disciplines.
Ontologies are usually developed by human experts, but even for experts the job of defining all the relevant terms is a difficult and time-consuming one. It is therefore important to provide intelligent tools that support ontology designers. For this reason, many ontology languages, including OWL (the standard language used for Semantic Web ontologies), are based on logics. This provides a formal specification of the meaning of the language and allows tools to use automated reasoning systems, e.g., to check that interactions between descriptions do not lead to logical contradictions. Reasoning systems are also useful when ontologies are deployed in applications, where they could be used, e.g., to determine which Web pages match a search request that uses terms defined in an ontology.
The central role of ontologies in the above mentioned applications brings with it, however, requirements for expressive power and reasoning support which are beyond the capabilities of existing ontology languages and reasoning systems. For example, OWL cannot express the fact that the brother of a person's father is also their uncle, and even for OWL, no practical reasoning system is yet available. Moreover, existing reasoning systems often have difficulties dealing with the very large ontologies that are needed in many realistic applications. The research described in this proposal aims at bridging this gulf between requirements and capabilities; its ultimate goal is the development of logics and reasoning techniques that will form the foundations of the next generation of ontology languages and reasoning systems.
The research programme will be made up of three complementary strands. The first strand will focus on existing ontology languages, and the logics on which they are based. The aim will be to devise principled extensions of these ontology languages that meet expressive requirements that have been identified in application areas such as medicine and the Semantic Web.
The second strand will focus on implementation techniques for existing ontology languages and for extended languages developed in the first strand. The aim will be to develop highly optimised reasoning systems capable of supporting both the design and deployment of ontologies in large scale applications.
The third strand will focus on very expressive ontology languages. These languages are based on logics where it is known to be impossible to build a reasoning system that can solve any problem, e.g., one that is guaranteed to detect all possible contradictions.
The aim is to develop reasoning systems that will still be able to efficiently solve the vast majority of problems derived from the use of ontologies in applications. Finally, in order to ensure that the logics, algorithms and reasoning systems being developed really do meet application requirements, they will be tested and evaluated in cooperation with ontology designers and developers of ontology-based applications.

EP/C543319/2
Ian Horrocks (Principal Investigator)
Info. & commun. Technol. (80%), Tools, technologies & methods (20%)
Bioinformatics (20%), Fundamentals of Computing (16%), Information & Knowledge Mgmt (64%)
University of Oxford, United Kingdom (Lead Research Organisation)
BAE Systems (Collaboration)
Samsung (Collaboration)
World Wide Web Consortium (Collaboration)

Publications:
Motik B (2008) Individual Reuse in Description Logic Reasoning
Motik B (2008) Representing Structured Objects using Description Graphs
Glimm B (2010) Automated Reasoning
Glimm B (2010) The Semantic Web - ISWC 2010
Glimm B (2008) Conjunctive query answering for the description logic SHIQ, in Journal of Artificial Intelligence Research
Grau B (2008) OWL 2: The next step for OWL, in Journal of Web Semantics
Grau B C (2008) Modular reuse of ontologies: Theory and practice, in Journal of Artificial Intelligence Research
Horrocks I (2008) Ontologies and the semantic web, in Communications of the ACM
Jiménez-Ruiz E (2011) Supporting concurrent ontology development: Framework, algorithms and tool, in Data & Knowledge Engineering
Jiménez-Ruiz E (2009) The Semantic Web: Research and Applications
Jiménez-Ruiz E (2011) Logic-based assessment of the compatibility of UMLS ontology sources, in Journal of Biomedical Semantics
Jiménez-Ruiz E (2009) Building ontologies collaboratively using contentCVS, in CEUR Workshop Proceedings
Jiménez-Ruiz E (2009) Towards a logic-based assessment of the compatibility of UMLS sources, in CEUR Workshop Proceedings
Kazakov Y (2008) A Resolution-Based Decision Procedure for SHOIQ, in Journal of Automated Reasoning
Magka D (2011) Tractable Extensions of the Description Logic EL with Numerical Datatypes, in Journal of Automated Reasoning
Magka D (2010) Automated Reasoning
Motik B (2009) Representing ontologies using description logics, description graphs, and rules, in Artificial Intelligence
Motik B (2009) Hypertableau Reasoning for Description Logics, in Journal of Artificial Intelligence Research
Motik B (2009) Bridging the gap between OWL and relational databases, in Journal of Web Semantics
Pérez-Urbina H (2009) Practical aspects of query rewriting for OWL 2, in CEUR Workshop Proceedings
Pérez-Urbina H (2009) The Semantic Web - ISWC 2009
Pérez-Urbina H (2008) Semantics in Data and Knowledge Bases
Pérez-Urbina H (2010) Tractable query answering and rewriting under description logic constraints, in Journal of Applied Logic
Pérez-Urbina H (2009) A comparison of query rewriting techniques for DL-Lite, in CEUR Workshop Proceedings
Shearer R (2009) Exploiting Partial Information in Taxonomy Construction
Sattler U (2008) Modular Reuse of Ontologies: Theory and Practice, in Journal of Artificial Intelligence Research
Sattler U (2008) Conjunctive Query Answering for the Description Logic SHIQ, in Journal of Artificial Intelligence Research
Shearer R (2009) HermiT: A highly-efficient OWL reasoner, in CEUR Workshop Proceedings
Kazakov Y (2008) RIQ and SROIQ Are Harder than SHOIQ

Award Value:
EP/C543319/1: 01/03/2006 to 31/08/2007, £369,194
EP/C543319/2 (transfer of EP/C543319/1): 01/09/2007 to 28/02/2011, £262,852

Software and Technical Products

Description: The research carried out in this project exerted a considerable influence on the development of the semantic web in general and of ontology languages in particular, where I played a key role in the development of a series of description logic based ontology languages. I recently chaired the W3C working group that developed OWL 2, a successor to the W3C's OWL ontology language standard. OWL 2 is based almost entirely on my research into Description Logics, decision procedures, reasoning systems and the application of all of the above in ontology languages and tools. OWL 2 also extends OWL with tractable profiles based on key works within the DL community, notably work on the DL-Lite and EL families of tractable DLs. Thus I have succeeded in disseminating a range of important research results from across the DL community and greatly increasing their influence on practical applications.

Regarding basic research in DLs, numerous important results have been achieved during the course of the project. Most important of these was my work on developing decision procedures for SHOIQ and SROIQ. These logics form the core of OWL 2, and the combination of nominals, inverse and counting (the OIQ part) makes the design of a decision procedure particularly tricky. In fact the decidability of this logic was an important open problem for several years. I showed that it is decidable, but that it has a much more complex model structure in which the non-tree part of the model is not restricted to the ABox. I devised a new technique for constructing such models by introducing new ABox individuals as needed.

Regarding optimisation techniques and reasoning systems, working with Boris Motik and others I developed a new hypertableau reasoning technique for SROIQ, implemented it in the HermiT system, and devised a whole range of new optimisation techniques. HermiT is now the standard reasoner distributed with Stanford's Protégé ontology development environment, and as such is being used by thousands of ontology developers around the world. The techniques developed in HermiT and in my earlier FaCT and FaCT++ systems are also the basis for all the sound and complete OWL reasoning systems known to me.

In addition to the above, I have worked on a range of other problems in the KR area. With Cuenca Grau, Kazakov and Sattler I developed theory and practical techniques for modularising ontologies, something for which only ad hoc techniques had previously been available. With Glimm and Lutz I developed query answering techniques for expressive DLs, solving, e.g., the open problem as to the decidability of conjunctive query answering for SHIQ. With Grosof, Patel-Schneider and others I developed a variety of techniques for integrating rules with DLs and DL-based ontology languages, with our papers on DLP and on SWRL still being amongst the most highly cited in this area. With Cuenca Grau and Stoilos I have recently been working on the systematic evaluation of incomplete reasoning techniques, and our paper on this topic won a distinguished paper award at AAAI last year. The influence of this work can be gauged from citation counts and from the number of invited keynotes that I have given in recent years.
Regarding the former, according to Google Scholar, my h-index is 74, and my work has been cited more than 26,000 times. Regarding the latter, I have given more than 40 keynote talks and invited seminars, and in 2010 alone I gave keynotes at the DL, KR, ICDT/EDBT, ECAI and KSEM conferences, as well as giving seminars at Oracle Inc, and at Stanford, Nanjing and Zhejiang universities.

Exploitation Route: The OWL standard(s) are widely used in industry and research (both academic and commercial).
Sectors: Digital/Communication/Information Technologies (including Software); Healthcare; Culture, Heritage, Museums and Collections
URL: http://www.cs.ox.ac.uk/ian.horrocks/Projects/logo.html

Description: I chaired the W3C OWL working group that developed the OWL 2 ontology language standard. The standard is based on ontology language and reasoning research carried out in the project. OWL has had an enormous impact, and has facilitated the widespread application of ontology-based technologies in numerous sectors, including healthcare, finance, energy and manufacturing. For example, even during the course of the project I was collaborating with end users at BAE Systems, EDF and Samsung. Since then I have worked with numerous other companies, and the use of OWL is by now pervasive.
Sector: Aerospace, Defence and Marine; Digital/Communication/Information Technologies (including Software); Energy; Financial Services and Management Consultancy; Healthcare; Government, Democracy and Justice; Manufacturing, including Industrial Biotechnology; Culture, Heritage, Museums and Collections; Pharmaceuticals and Medical Biotechnology
Impact Types: Economic

Description: EPSRC Condor. Funding ID: EP/G02085X/1. Organisation: Engineering and Physical Sciences Research Council (EPSRC)
Description: EPSRC ExODA. Funding ID: EP/H051511/1
Description: EPSRC HermiT. Funding ID: EP/F065841/1
Description: EPSRC RInO. Funding ID: EP/E03781X/1
Description: FP7 Seals. Funding ID: FP7 - 238975

Description: BAE Systems Logo. Organisation: BAE Systems. Sector: Academic/University
PI Contribution: Assisted with research into ontology-based data integration.
Collaborator Contribution: Use case, implementation and evaluation.

Description: Chair of W3C OWL Working Group. Organisation: World Wide Web Consortium. Sector: Charity/Non Profit
PI Contribution: Horrocks was chair of the W3C OWL Working Group, and led the development of the OWL 2 standard.
Collaborator Contribution: Infrastructure for working group
Impact: W3C OWL 2 ontology language standard -- see https://www.w3.org/TR/2012/REC-owl2-overview-20121211/

Description: Samsung. Organisation: Samsung, Samsung Advanced Institute of Technology. Country: Korea, Republic of. Sector: Private
PI Contribution: Helping them to develop an ontology reasoning system to run on their mobile platforms.
Collaborator Contribution: Use case, testing and evaluation.

Title: HermiT. Description: A reasoning system for OWL ontologies. Type Of Technology: Software. Year Produced: 2008. Open Source License? Yes
Impact: HermiT is the most widely used OWL reasoner, and the only one to fully support the OWL 2 standard. It is used in both research and industry, for example in EDF's energy management adviser, which is used by hundreds of thousands of EDF customers in France.
URL: http://hermit-reasoner.com/

Description: DBOnto kick-off workshop. Form Of Engagement Activity: Participation in an activity, workshop or similar. Part Of Official Scheme?
No.
Geographic Reach: International
Primary Audience: Industry/Business
Results and Impact: Workshop for industry partners presenting Information Systems Group research. Participants included Oracle, Siemens, IBM, FluidOperations, B2i healthcare, Roke, Facebook and the universities of Stanford, Rome (La Sapienza), Politecnico di Milano and FZI.
Year(s) Of Engagement Activity: 2014
URL: http://dbonto.cs.ox.ac.uk/kickoff.html
CommonCrawl
Self-adaptation by coordination-targeted reconfigurations
Nuno Oliveira & Luís S. Barbosa
Journal of Software Engineering Research and Development, volume 3, Article number: 6 (2015)

A software system is self-adaptive when it is able to dynamically and autonomously respond to changes detected either in its internal components or in its deployment environment. This response is expected to ensure the continuous availability of the system by maintaining its functional and non-functional requirements. Since these systems are usually distributed, coordination middleware (typically a centralised architectural entity) plays a definitive role in establishing the system goals. For these reasons, adaptations may be triggered at coordination level, issuing reconfigurations to such a coordination entity. However, predicting when exactly reconfigurations are needed, and whether they will lead the system into a non-disruptive configuration, is still an issue at this level. This paper builds on a framework for formal verification of architectural requirements, either from a qualitative or quantitative (probabilistic) point of view, which will leverage analysis and adaptation prediction. In order to address the mentioned difficulties, the paper discusses both a model that lays down reconfiguration strategies, planned at design time, and a process that actively uses such a model to trigger coordination-targeted reconfigurations at run time. Moreover, a cloud-based architecture for the implementation of this strategy is proposed, as an attempt to deliver adaptation as a service. A case study is presented that assesses the suitability of the approach for real-world software systems. We highlight the use of formal models to represent the coordination layer and necessary reconfigurations of a software system, and also to predict the need for (and to trigger) adaptations.

Emergency call-centers facing unexpected peaks of activity, surveillance systems whose CCTV devices have to operate under changeable environment conditions, or applications for mobile devices constrained by limited battery autonomy, are examples of systems which have somehow to adapt to change along a normal operating cycle. The expression self-adaptive qualifies a behaviour which has to respond at run time to contextual changes, detected either internally or externally, in order to keep meeting its own functional requirements and general service level agreement (SLA), ensuring the relevant quality of service (QoS) attributes (Garlan et al. 2009; Oreizy et al. 1999). This entails the need for some degree of introspection. Actually, such systems should be able to keep track of their internal interconnection structures, attributes, execution environment, requirements and reference performance levels; but above all, to observe and detect changes in these elements. Such observations, suitably processed (e.g., by comparison to reference levels assigned to measurable variables), will be responsible for triggering adaptations.

This process, which spans from acquiring information to check for relevant changes, to actually enacting adaptations, is known as the control or feedback loop model in the literature associated to control theory, autonomic computing, robotics or artificial intelligence (Gat 1998; Nilsson 1980). Its implementation involves four components responsible for monitoring, analysing, planning and executing changes, as defined in the MAPE(-K) reference model (IBM Corp 2004; Kephart and Chess 2003).
In self-adaptive software this model is realised by monitoring the environment and probing the system's attributes; analysing the data collected to infer situations in need of adaptation; deciding the adaptation strategy; and finally, enacting reconfigurations to enforce the system's adaptation into acceptable (non-disruptive) configurations (Brun et al. 2009; Dobson et al. 2006; Villegas Machado et al. 2011).

Self-adaptive systems are often distributed, component-based, with highly demanding requirements. Coordination middleware, typically a centralised architectural entity, defines the interaction between such components. This is responsible for establishing the overall system goals by covering its requirements (Arbab 2004). For this reason, the coordination layer of these systems plays a fundamental role in the adaptation process. Concretely, coordination models (e.g., Reo (Arbab 2004), BIP (Basu et al. 2011), among others) are operative in the generation of introspective/reflective abstractions of the whole system from its coordination layer. This highlights the importance of coordination-targeted reconfigurations.

But deciding and applying reconfigurations is not an easy task. Mainly, this is due to the unpredictable, evolving nature of the deployment context, which precludes knowing with exactitude when a reconfiguration has to be applied, and predicting its outcome. Reconfigurations can be planned in advance provided that a number of relevant context attributes are identified and translated into measurable variables. Suitable ranges of values for these attributes may help to plan (at design time) configurations that will, most likely, drive the system into stable states meeting specific sets of conditions. Nevertheless, assumptions made at design time may not apply directly after deployment. On the other hand, unpredictable contexts may trigger reconfigurations that were not intended to occur because, for example, they may violate some key functional properties of the original design. Triggering reconfigurations must, therefore, take into account not only the expected QoS levels, but also functional properties which are identified as design invariants.

We have recently proposed a framework for modelling coordination-targeted reconfigurations and verifying their properties in the presence of contextual changes (Oliveira and Barbosa 2013a, b). This work is based on a generic coordination model encompassing a graph whose edges are regarded as connector specifications. The properties of interest relate either to behaviour, classifying reconfigurations w.r.t. the behavioural changes induced on the coordination model, or to structure, namely to the topology of the underlying coordination model. Structural properties are expressed in a specific variant of hybrid logic (Blackburn 2000), interpreted over the graph representing the interconnection network. We have also introduced a quantitative behavioural model (Oliveira et al. 2014) for this coordination style based on Markov chains (Hermanns 2002). This opened the possibility of assessing and comparing reconfigurations along a quantitative (actually, a probabilistic) reasoning dimension.

In broad terms, this paper focuses on the adaptability quality attribute for software architectures, often regarded as a major one in architectural design (Ciraci and van den Broek 2006; Losavio et al. 2003). In particular, the paper proposes a self-adaptation strategy, following the MAPE reference model.
The novelty is the introduction of a model of coordination-targeted reconfiguration strategies, planned at design time. This model is actively used to decide and trigger adaptations at run time. The model's key ingredient is a transition system whose states are the configurations originally envisaged for the architecture, and edges represent reconfigurations, i.e., paths from one configuration to another. The work reported here extends the original SBCARS'2014 paper (Oliveira and Barbosa 2014) as follows: a state transfer strategy for dynamic reconfigurations is formalised, the self-adaptation strategy is detailed, and an architecture to deliver adaptation as a cloud-based service is proposed.

The envisaged adaptation strategy is discussed in Section 4. Before that, in Section 2, the underlying framework for reconfigurations is introduced; and in Section 3 this is further extended to cope with dynamic reconfigurations, notably with the consistent state transfer problem. A detailed example is discussed in Section 5. Section 6 proposes a refactoring of the adaptation strategy in order to deliver it as a cloud-based service for adaptation. Section 7 reviews relevant related work; and finally, Section 8 concludes and proposes topics for future work.

A framework for architectural reconfiguration

A software architecture is often represented as a graph whose vertices are labelled by components and interconnected by adapters, wrappers, connectors or other forms of glueware depicted in the edges (Wermelinger 1999). In this setting, architectural reconfigurations mainly target components and connectors by adding, removing or substituting them as blocks (Hnětynka and Plášil 2006). However, in typical service-oriented systems, the coordination layer becomes prominent. Therefore, our focus will be the reconfiguration of connectors and the communication protocols they implement.

As proposed in (Oliveira and Barbosa 2013a, b), software architectures are regarded as graphs of communication channels, where nodes are interaction points and edges are labelled with an identifier and a type which encodes a concrete coordination policy. These graphs are called coordination patterns and concretely model service orchestration. They are abstract representations of software connectors and therefore independent of any concrete coordination model. Each coordination pattern is characterised by its input and output ports and the internal interaction of channels, which provide them with a specific behaviour. The set of all coordination patterns is denoted by \(\mathcal {P}\).

Fig. 1a depicts an example of a coordination pattern which allows for a sequential interaction on output ports a and b after a stimulus is received on input port i. Fig. 1b, in turn, depicts a coordination pattern that ensures a parallel interaction on output ports a and b, after being stimulated on input port i. For concreteness, the Reo framework (Arbab 2004) has been adopted to type channels and to represent them graphically.

Fig. 1. Coordination patterns example: the Sequencer (a) and the ParallelSplit (b). White circles denote interface ports while black circles stand for internal nodes.

In this context, a reconfiguration is defined as any change made to the structure of a coordination pattern. Such changes are guided by the application of primitive operations that manipulate the pattern's basic elements.
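To make the graph representation concrete, the sketch below encodes the two patterns of Fig. 1 and the kind of primitive manipulation (channel removal and node joining) used by the reconfigurations introduced next. It is only an illustration in Python: the channel layout, the identifiers (s1 to s4, cd, f) and the helper names are assumptions consistent with the running example in the text, not the authors' CooPLa encoding.

```python
# Minimal sketch: coordination patterns as lists of typed channels
# (identifier, Reo channel type, source node, target node).  The concrete
# topology below is an assumption consistent with the Sequencer/ParallelSplit
# example used throughout the paper.

def nodes(pattern):
    return {n for _, _, src, trg in pattern for n in (src, trg)}

def input_ports(pattern):
    return nodes(pattern) - {trg for _, _, _, trg in pattern}

def output_ports(pattern):
    return nodes(pattern) - {src for _, _, src, _ in pattern}

# --- a subset of the primitive reconfigurations described next ------------

def remove(pattern, cid):
    """remove_c: drop the channel identified by cid."""
    return [ch for ch in pattern if ch[0] != cid]

def par(pattern, other):
    """par_pi: place a disjoint pattern alongside the current one."""
    return pattern + other

def join(pattern, to_merge):
    """join_N: collapse all nodes in 'to_merge' into a single node."""
    new = "".join(sorted(to_merge))
    ren = lambda n: new if n in to_merge else n
    return [(cid, ctype, ren(src), ren(trg)) for cid, ctype, src, trg in pattern]

def implode(pattern, cids):
    """Reconfiguration pattern: remove the channels in cids, then join the
    ports left dangling by their removal."""
    dangling = {n for cid, _, src, trg in pattern if cid in cids for n in (src, trg)}
    for cid in cids:
        pattern = remove(pattern, cid)
    return join(pattern, dangling)

# Sequencer (Fig. 1a): a stimulus on i yields output on a and then on b.
sequencer = [("s1", "fifo", "i", "cd"), ("s2", "sync", "cd", "a"),
             ("s3", "fifo", "cd", "f"), ("s4", "sync", "f", "b")]

print(sorted(input_ports(sequencer)), sorted(output_ports(sequencer)))   # ['i'] ['a', 'b']

# implode_{s3} turns the Sequencer into the ParallelSplit of Fig. 1b:
print(implode(sequencer, {"s3"}))
# [('s1', 'fifo', 'i', 'cdf'), ('s2', 'sync', 'cdf', 'a'), ('s4', 'sync', 'cdf', 'b')]
```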
An algebra of reconfigurations was defined based on the following primitive reconfiguration operations: const_π, par_π, join_N, split_n and remove_c, where the indexes represent parameters: π is a coordination pattern, N is a set of nodes, n is a node and c is a channel identifier. The set of primitive operations is denoted by Prim. These operations are applied sequentially to a coordination pattern. An intuitive description of their behaviour is as follows: const_π substitutes π_i by π; par_π sets π in parallel with π_i (the two are assumed to be completely disjoint), but does not establish any connections between the two; join_N connects all nodes in N (that exist in π_i) into a single one; split_n, as its name suggests, performs the inverse operation; and, finally, remove_c removes the channel identified by c from π_i.

These primitives may be composed sequentially to yield complex and yet reusable constructions referred to as reconfiguration patterns. For instance, the implode_C pattern, when applied to a coordination pattern π_i, removes all channels in set C from π_i (applying the remove primitive recursively over C) and then reconnects (with join) the resulting ports. Fig. 1b shows the result of applying implode_{s3} to the Sequencer pattern: the interaction at ports a and b becomes parallel, instead of sequential. The reader is referred to (Oliveira and Barbosa 2013a, b) for a detailed description of this algebra of reconfigurations. The set of all reconfigurations is denoted by \(\mathcal {R}\).

Often it becomes necessary to rule out reconfigurations that lead to system states which fail to preserve some key functional requirements of a system, measured either in terms of behavioural or structural properties. The ability of inspecting these properties is, therefore, mandatory when dealing with adaptable architectures. Next we discuss three perspectives on reasoning about reconfigurations: behavioural, structural and quantitative.

2.2.1 The behavioural perspective

In order to reason about reconfigurations from a behavioural perspective it is necessary to fix a concrete semantic model for coordination patterns. This must encompass suitable notions of observational equivalence and refinement (often encoded as bisimulation and simulation relations), which are required to compare behaviours, typically before and after reconfiguration processes. In this framework, reconfigurations are classified as (i) unobtrusive, when the original behaviour is completely preserved; (ii) expansive, when new behaviour is added, but still preserving the original; (iii) contractive, when part of the original behaviour is removed; and (iv) disruptive, when the original behaviour, or part of it, is not preserved. In practice these classifications are made w.r.t. a specific coordination pattern and the underlying semantic model. As an example, the reconfiguration implode_{s3} is disruptive w.r.t. the Sequencer coordination pattern (c.f., Fig. 1), taking Reo automata (Bonsangue et al. 2012) as a concrete semantic model.

2.2.2 The structural perspective

For structural reasoning, on the other hand, the model is the (underlying graph of the) coordination pattern itself. This is taken as the (Kripke) structure (Blackburn et al. 2001) for interpretation of a propositional hybrid logic (Brauner 2010) in which structural properties are expressed.
A typical example of a structural property is the requirement that a synchronous channel has to be followed by a channel with some buffering capacity. Sentences in this hybrid logic are given by the following grammar:

$$\phi\; ::= \; i \: \mid\: \neg \phi \: \mid\: \phi_{1} \land \phi_{2}\: \mid\: [{K}]{\phi} \: \mid\: [\![{K}]\!]{\phi} \: \mid \: @_{i} \phi $$

where i is a nominal (a propositional symbol that is true at exactly one node of the coordination pattern); constants true, false and the boolean connectives are defined as usual; K is a set of channel types (abbreviations '−' and '−t' refer to the whole set of channel types and to that set except t, respectively). Modalities [K]ϕ and ⟦K⟧ϕ quantify universally over the edges of the coordination pattern and express properties of the outgoing (respectively, incoming) connections from (respectively, to) the node at which the formula is evaluated. Their duals, ⟨K⟩=¬[K]¬ and ⟨⟨K⟩⟩=¬⟦K⟧¬, define existential quantification over the edges of the pattern. The satisfaction operator @_i redirects the evaluation of a formula to the context of a node named by nominal i.

As mentioned above, this logic is able to express rather sophisticated (structural) requirements. For example, the requirement "communication through the input port is made asynchronous" is represented by @_i⟨⟨−⟩⟩false → @_i[fifo]true. Here, i is a nominal referring to the node i in the patterns of Fig. 1. Indeed the formula says that if the node identified by i is an input port (i.e., it has no incoming connections, formally @_i⟨⟨−⟩⟩false) then all outgoing channels are of type fifo, where fifo represents a buffered (asynchronous) channel.

2.2.3 The quantitative perspective

Finally, to introduce quantitative reasoning into the framework, the Kripke structure derived from the underlying coordination pattern is analysed from a stochastic point of view. As a general strategy this entails the need for a stochastic model for software connectors. In (Oliveira et al. 2014) we have proposed a compositional, quantitative semantic model for Reo-like connectors, based on interactive Markov chains (IMC) (Hermanns 2002; Hermanns and Katoen 2010), from which basic features (e.g., compositionality and the existence of suitable notions of bisimilarity) are inherited. Stochastic coordination patterns and their reconfigurations can thus be analysed through well-known and reliable tools for stochastic processes, namely IMCA (Guck et al. 2012), CADP (Garavel et al. 2012) and PRISM (Kwiatkowska et al. 2010).

It is worth noting that a stochastic semantics can be adopted both for behavioural reasoning (regarding connectors as stochastic devices, as in (Moon et al. 2014; Oliveira et al. 2014)) and for structural reasoning (regarding the coordination pattern itself as a weighted transition system). This reduces the number of model-to-model transformations, languages and tools for expressing and verifying architectural requirements, and consequently, the number of assets used in analysis. Henceforth, the set of analysable assets will be denoted by \(\mathcal {A}\).

Ensuring consistent dynamic reconfigurations

The application of reconfigurations to the architecture of a software system at runtime is a major and non-trivial research problem, mainly because reconfigurations have to be transparently applied while the exact system execution state in which a reconfiguration is required (henceforth referred to as the interrupted state) is hardly known a priori.
The qualifier transparent above means that the system has to change its internal configuration without service disruption during and after a reconfiguration process. This entails the need for (i) the atomic application of reconfigurations, with roll-back mechanisms triggered when the application fails; (ii) resuming the execution of the system in a state that is consistent (as much as possible) with the interrupted state; and (iii) keeping the system in line with its functional and non-functional requirements.

The framework revisited in Section 2 mitigates some of these problems. Requirement (i), for example, is met because primitive reconfigurations are atomic low-level operations amenable to be rolled back, provided the existence of associated reconfiguration monitoring mechanisms. The same happens in case (iii), due to the methods provided for analysis (i.e., verification of structural, behavioural and probabilistic properties) and comparison of reconfigurations, which can be exploited from a static perspective. But, certainly, the framework does not support requirement (ii), since it does not deal explicitly with the dynamic application of reconfigurations. From a static perspective, the interrupted state is either ignored or always assumed to be the initial one. After a reconfiguration, the system is again in its initial state. For the overall analysis of the system properties this approach is reasonable. Consequently, ensuring system consistency from a static perspective of reconfigurations is not a challenge. It must be taken seriously, though, when dynamism enters the equation. The unpredictable evolution of the (relevant properties of the) deployment environment may raise the need for reconfiguration at any moment in time, regardless of the overall system state.

In the sequel we propose a simple approach to consistently transfer state between configurations. This builds on an underlying automata-based semantic model of the coordination pattern, enriched with symbolic state annotations. The enacting of reconfigurations is assumed to occur when the system enters a quiescent state, as usual in practice (Kramer and Magee 1990).

A symbolic approach to state transfer

As mentioned above, reconfigurations in this framework target the coordination layer of a system, modelled through coordination patterns. These patterns exhibit behaviours in some specific semantic model, typically automata-based, defined by the software architect. However, in order to define a strategy for consistent state transfer, it is necessary that these automata are enriched with a symbolic representation of state data. In the sequel we continue considering Reo for concretely typing the edges of a coordination pattern, and we take port automata (Krause 2011) (for their simplicity) as the underlying semantic model.

Symbolic annotations are generated by the following grammar \(\mathcal {S}\):

$$ s\, ::=\, \varsigma ~|~ \neg s ~|~ s\land s $$

where ς is an atomic symbolic state. An atomic symbolic state refers to the identifier of an edge in the coordination pattern to which data is assigned. In the concrete case of Reo, we use the identifiers of the coordination pattern edges typed with a fifo channel, as this is the only stateful channel considered in Reo. Notice that, although the notation above is borrowed from logic, the connectives ¬ and ∧ have a specific meaning here.
Thus, ¬ς means that the internal state ς of the pattern has no data assigned (and therefore can be omitted from the formula), and ς_1∧ς_2 means that both states have data in the context of the pattern. Moreover, it is asserted that ¬ς_1∧ς_1 = ¬ς_1 and ¬(ς_1∧ς_2) = ¬ς_1∧¬ς_2. Additionally, notation ⊥_π is used to express that there is no data in any internal state of pattern π, and ⊤_π for its dual. The index π can be omitted when clear from the context.

Definition 1 (Symbolic Port Automata). A symbolic port automaton \(\mathcal {A}_{\varsigma }\) is an automaton (Q,P,→,q_0), where Q ⊆ \(\mathcal {S}\) is a set of symbolic states, P is a set of ports, q_0 ∈ Q is the initial state, and → ⊆ Q × 2^P × Q is a transition relation.

A transition (q,{a,b,...},p), written as \(q \xrightarrow {\{a,b,\ldots \}} p\), means that the system evolves from state q to state p when ports a,b,... can interact synchronously. Notation ⟦π⟧_ς is used, henceforth, to refer to the symbolic port automaton of coordination pattern π.

As an example, consider the ParallelSplit coordination pattern in Fig. 1b. The state space of the underlying symbolic port automaton is Q = {⊥, s_1}, or optionally Q = {¬s_1, s_1}. On the other hand, the state space of the symbolic port automaton for the Sequencer coordination pattern (Fig. 1a) would be Q = {⊥, s_1, s_3, ⊤}, or optionally Q = {¬s_1∧¬s_3, s_1∧¬s_3, ¬s_1∧s_3, s_1∧s_3}. The corresponding port automata for the Sequencer and the ParallelSplit patterns are depicted in Fig. 2 (top and bottom, respectively). The auxiliary operation IS(⟦π⟧_ς) returns the initial state of the symbolic port automaton of coordination pattern π. Whenever π is the empty coordination pattern, IS returns the general symbolic state ⊥_π. The state transfer operation is defined as follows.

Fig. 2. State transfer upon dynamic reconfiguration: port automata for the Sequencer (top) and ParallelSplit (bottom) coordination patterns.

Definition 2 (State Transfer). Let π be a coordination pattern and \(S_{r} \in \mathcal {S}\) the symbolic interrupted state for reconfiguration r = {r_0, r_1, …, r_n} (where each r_i is a reconfiguration primitive). The state transfer operation for applying r to π in state S_r, denoted by \(\leadsto _{\pi,r,S_{r}}\), is inductively defined as \(\leadsto _{\pi,r_{0},S_{r}} \land \leadsto _{\pi,\{r_{1},\ldots,r_{n}\},S_{r}}\), where for each r_i ∈ Prim:

$$\leadsto_{\pi,r_{i},S_{r}} = \left\{ \begin{array}{l l} \mathsf{IS}([\![{\pi^{\prime}}]\!]_{\varsigma}) & \ \text{if} \ r_{i} = \mathsf{const}_{\pi^{\prime}}\\ S_{\mathsf{par}_{\pi^{\prime}}} \land \mathsf{IS}([\![{\pi^{\prime}}]\!]_{\varsigma}) & \ \text{if} \ r_{i} = \mathsf{par}_{\pi^{\prime}}\\ S_{\mathsf{remove}_{c}} \land \neg c & \ \text{if}\ r_{i} = \mathsf{remove}_{c} \ \text{and} \ \mathfrak{T}^{c}_{\pi} \in \{\mathsf{fifo}\} \\ S_{r_{i}} & \ \text{otherwise} \end{array} \right. $$

and \(\mathfrak {T}^{c}_{\pi }\) retrieves the type of the channel c in the coordination pattern π.

This can be generalised as follows. Assume a reconfiguration r; a (possibly empty) coordination pattern π_in formed either by (i) all patterns introduced by par primitives in r or (ii) the pattern introduced by the last const primitive and all patterns introduced by the subsequent par primitives in r; a coordination pattern π_out as the result of applying r to π; and finally R(π,π_out) as the set of stateful channel names removed during the reconfiguration. Then,

$$S_{r} \land \mathsf{IS}([\![{\pi_{in}}]\!]_{\varsigma}) \land \neg \bigwedge R(\pi,\pi_{out}) $$

is the generalisation of \(\leadsto _{\pi,r,S_{r}}\).
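The case analysis of Definition 2 can also be read operationally. The following Python sketch is one illustrative reading, under the assumption that a symbolic state can be flattened into the set of fifo identifiers currently holding data (so that conjunction with a negated identifier amounts to removing it from the set); the function and variable names are this sketch's own, not the authors' implementation.

```python
# Symbolic states as sets of fifo identifiers that currently hold data.
# initial_state(pi) plays the role of IS([[pi]]): every fifo starts empty.

def initial_state(added_pattern):
    return set()                            # no fifo of the added pattern holds data yet

def transfer_step(state, primitive, channel_types):
    """Contribution of one primitive to the resuming state (Definition 2).
    'primitive' is a pair (operation, argument); channel_types maps a
    channel identifier to its Reo type."""
    op, arg = primitive
    if op == "const":                                        # const_pi': take IS(pi')
        return initial_state(arg)
    if op == "par":                                          # par_pi': keep state, add IS(pi')
        return set(state) | initial_state(arg)
    if op == "remove" and channel_types.get(arg) == "fifo":  # data of a removed fifo is lost
        return set(state) - {arg}
    return set(state)                                        # join, split, other removes: unchanged

def transfer(state, reconfiguration, channel_types):
    for primitive in reconfiguration:
        state = transfer_step(state, primitive, channel_types)
    return state

# implode_{s3} on the Sequencer, interrupted when both fifos hold data:
types = {"s1": "fifo", "s2": "sync", "s3": "fifo", "s4": "sync"}
print(transfer({"s1", "s3"}, [("remove", "s3"), ("join", {"cd", "f"})], types))   # {'s1'}
```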
The state obtained from this operation is referred to as the resuming state.

An application example

Consider the Sequencer coordination pattern of Fig. 1a as the model for the coordination layer of a running system. In certain situations (e.g., when servers are overloaded with user requests) the system was designed to evolve into a parallelised provisioning of its services, therefore adopting a ParallelSplit configuration for its coordination layer. This involves the application of an implode_{s3} reconfiguration to the original pattern. Since the system is running, and the contexts which trigger such a reconfiguration are unpredictable, it is necessary to take the consistency of the system into consideration. This entails the need for the correct transfer of the state to the new configuration. It is assumed (for illustration purposes) that the reconfiguration process does not fail and that the obtained configuration will maintain the invariant properties of the system.

Consider the port automaton for the Sequencer coordination pattern as depicted in the first row of Fig. 2. Four replications of the automaton are presented, representing the four possible states (circled with dashes) in which the implode_{s3} reconfiguration can be issued. After the reconfiguration, such states must be restored if possible. The resuming states in the context of the ParallelSplit port automaton are depicted as shaded circles. The tables between the automata present values for S_r, the state interrupted for application of reconfiguration r = implode_{s3}; I, the initial state of the structure added to the pattern; and R, the conjunction of the identifiers of stateful channels (fifo channels in this case) removed from the original pattern. These are the necessary ingredients to apply the general state transfer operation in order to obtain the desired resuming state.

Recall that the implode_{s3} operation may be translated into the sequence of primitives r = {remove_{s3}, join_{{cd,f}}}. Therefore, the only stateful channel removed is exactly s_3, thus R = ⋀R(π,π_out) = s_3; and no patterns are added to the original one, thus I = IS(⟦π_in⟧_ς) = ⊥_{π_in}. For the latter, since all the patterns added by par primitives are disjoint from the original pattern, all symbolic states negated in ⊥_{π_in} are different from the ones in the original pattern.

Let us now discuss the four situations depicted in Fig. 2, from left to right, in more detail. In the first situation the reconfiguration is applied when the pattern is in its initial state. In this case such a state is ⊥, meaning that no data is assigned to the stateful channels of the pattern. Thus, the resuming state is still ⊥ in the new configuration. In the second situation the reconfiguration is applied when the system is in state s_1. Hence, the resuming state is s_1 ∧ ⊥_{π_in} ∧ ¬s_3 = s_1. There are situations, though, in which it is not possible to find a suitable resuming state in the new configuration. When such is the case, the usual approach is to start the execution of the reconfigured system from its initial state.
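The four situations just walked through (and the two discussed next) can be replayed with the generalised formula, again reading symbolic states as sets of fifo identifiers holding data. The encoding is illustrative only; the names are not from the paper.

```python
# Resuming state via the generalised formula of Definition 2:
#   S_r  /\  IS([[pi_in]])  /\  not(/\ R(pi, pi_out))
# with symbolic states flattened into sets of fifo identifiers holding data.

def resuming_state(interrupted, initial_added, removed):
    return (set(interrupted) | set(initial_added)) - set(removed)

# implode_{s3} on the Sequencer: pi_in is empty (IS is the empty set) and the
# fifo s3 is removed.  The four interrupted states of Fig. 2, left to right:
for interrupted in (set(), {"s1"}, {"s3"}, {"s1", "s3"}):
    resumed = resuming_state(interrupted, set(), {"s3"})
    print(sorted(interrupted), "->", sorted(resumed))
# []             -> []        (bottom stays bottom)
# ['s1']         -> ['s1']    (interrupted state fully preserved)
# ['s3']         -> []        (data of the removed fifo is discarded)
# ['s1', 's3']   -> ['s1']    (best approximation of the interrupted state)
```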
Our approach is more comprehensive in this respect: it automatically delivers the state that best approximates the desired one. For instance, in the third situation the resuming state should be s_3. But, since s_3 is removed, the best approximating state in the port automaton of ParallelSplit is the initial ⊥ = s_3 ∧ ⊥_{π_in} ∧ ¬s_3. For the same reason, in the fourth case, the interrupted state cannot be resumed as is. However, in this case, the best approximating state is s_1 = s_1 ∧ s_3 ∧ ⊥_{π_in} ∧ ¬s_3.

Self-adaptation strategy

The self-adaptation strategy proposed in the sequel is organised around two main phases. One is offline and concerns the planning of possible reconfigurations by the software architect. The other is online and focuses on the autonomous selection of reconfigurations to adapt a running system as part of a monitoring feedback loop.

The offline phase: planning reconfigurations

In this phase the architects have a preponderant role in preparing adaptation assets that are autonomously used in the online phase. One of these assets is a faithful model of the system architecture. This is modelled by coordination patterns (as discussed in Section 2) and constitutes the initial specification of the system. It is also in this phase that the system (functional and non-functional) requirements are encoded into verifiable properties targeting behaviour, structure and QoS. The set of all properties over the system and the environment is denoted by \(\mathcal {P}rop\). In fact, this set is divided into four parts containing functional (FUN) and non-functional (QoS) properties, system generic properties (SYS) and environment specific properties (ENV). Upon these properties, the architect defines the adaptation logic as a set of constraints.

Definition 3 (Constraint). A constraint is a triple (ϕ,β,υ), where ϕ ∈ \(\mathcal {P}rop\), β is a boolean operator, and υ ∈ ℝ ∪ 𝔹 is the expected value for the property-operator pair. The set of all possible constraints will be denoted by Ξ.

Constraints and their utility are further addressed in Section 4.3. The final asset from this phase is concerned with preparing (modelling and analysing) reconfigurations. The architects plan them by taking into account both the system requirements and possible ranges of values for the attributes that characterise its environment. This leads to a set of possible configurations and reconfigurations with a dependency relation between them. Such a dependency relation is captured by a reconfiguration transition system (RTS). Formally,

Definition 4 (RTS). An RTS is a tuple (C,→,k_i), where C ⊂ \(\mathcal {P}\) × 2^Ξ × 2^\(\mathcal {A}\) is a set of configuration states, k_i ∈ C is the initial configuration state and → ⊆ C × \(\mathcal {R}\) × C is the transition relation.

An RTS is, in essence, a labelled transition system. Transitions from each state κ are labelled with the reconfigurations that can be applied from there. States represent valid configurations of the deployed system. Each state is actually composed of a coordination pattern; a set of state-specific constraints, which enable finer decisions (details further in Section 4.3); and a set of necessary assets for the analysis, e.g., PRISM specifications and symbolic port automata.
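As a data structure, the RTS of Definition 4 is small enough to sketch directly. The encoding below is illustrative only: the names, fields and the single-step candidate pool are assumptions of this sketch, not the authors' implementation.

```python
# A minimal RTS: configuration states bundle a coordination pattern,
# state-specific constraints and pre-compiled analysable assets; transitions
# are labelled with reconfigurations.
from dataclasses import dataclass, field

@dataclass
class ConfigState:
    name: str
    pattern: object                                     # the coordination pattern
    constraints: list = field(default_factory=list)     # state-specific constraints (Definition 3)
    assets: dict = field(default_factory=dict)          # e.g. PRISM spec, symbolic port automaton

@dataclass
class RTS:
    states: dict           # name -> ConfigState
    transitions: list      # (source name, reconfiguration label, target name)
    initial: str
    current: str = ""

    def __post_init__(self):
        self.current = self.current or self.initial

    def candidates(self):
        """Pool of configurations reachable from the current state in one step."""
        return [(r, self.states[t]) for s, r, t in self.transitions if s == self.current]

    def apply(self, reconfiguration):
        """Move to the target of 'reconfiguration', keeping the invariant that
        'current' tracks the configuration of the deployed system."""
        for s, r, t in self.transitions:
            if s == self.current and r == reconfiguration:
                self.current = t
                return self.states[t]
        raise ValueError(f"{reconfiguration} is not enabled in {self.current}")

rts = RTS(
    states={"c1": ConfigState("c1", "sequencer"), "c2": ConfigState("c2", "parallel split")},
    transitions=[("c1", "implode_s3", "c2")],
    initial="c1",
)
print([r for r, _ in rts.candidates()])   # ['implode_s3']
```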
Note that these models are computed in this phase in order to avoid their inherent performance overheads later, at runtime.

The online phase: monitor feedback loop

The online phase consists of a monitor feedback loop (which springs from traditional approaches (IBM Corp 2004; Kephart and Chess 2003)) built upon the reconfiguration framework presented in Section 2. Fig. 3 depicts its main elements.

Fig. 3. Feedback loop based on a reconfiguration transition system.

We refer to this as a feedback loop based on an RTS, because the transition system of reconfigurations is a first-class entity in our approach. Globally, our implementation of a feedback loop requires the following assets: (i) an RTS; (ii) a model of the deployed system; (iii) a mapper, which maps the concrete connections to services to the logical ports of the model; (iv) the instant observations (measures) of the system properties; (v) a pool of candidate configurations (and their analysable assets); (vi) the reconfiguration framework for reasoning about the possible reconfigurations; (vii) the properties of interest of the system; and (viii) the services of tools for quantitative analysis of the configuration.

Three invariants assert that (a) the current state (i.e., the current configuration) of the RTS always points to the current configuration of the system architecture; (b) the current state of the symbolic port automata (within the current state of the RTS) reflects the current execution state of the system; and (c) the pool of candidate configurations consists of the models obtained from the current state by a single-step transition. In the sequel we detail how the three main components (monitor, planner and executor) work together, resorting to the above mentioned assets, to achieve adaptability.

4.2.1 Monitoring

The monitor component aggregates data from the deployment environment and the system itself. Probes are assumed to collect different sorts of data, depending on the variables that drive the adaptation. Latency, throughput, bandwidth, number of clients, number of servers or type of connection (e.g., wifi, bluetooth, GSM) are typical variables. The monitor uses the information from the mapper to associate raw data from the system to the model, which is then used as-is by the planner component. Fig. 4 shows a UML sequence diagram which describes the interaction between these elements.

Fig. 4. Sequence diagram for the Monitor component.

4.2.2 Planning

The planner has two components, the analyser and the decider, that work together to plan the most adequate adaptation to the given context. These components rely on the features of the architectural reconfiguration framework (presented in Section 2) for formally verifying the functional and non-functional properties of the architecture. Fig. 5 shows the sequence diagram for this component. Therein, the FPChecker and NFPChecker entities refer, respectively, to interfaces for the suitable functional and non-functional property analysing services.

Fig. 5. Sequence diagram for the Planner component.

In the step marked with (1) the decider uses the RTS for picking all the configurations reachable from the current state. This action creates a pool of candidate configurations along with their pre-compiled analysable assets. In the step marked with (2), the analyser reduces the pool by discarding configurations that fail to meet the required functional properties. These two steps are performed only once each time an adaptation occurs, or every time the functional properties of the system change.
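Steps (1) and (2) amount to building and pruning the candidate pool. A hedged sketch follows, where fp_check stands for a call to a functional-property verification back-end; its name and signature are assumptions of this sketch, and the rts object is assumed to expose a candidates() method as in the RTS sketch above.

```python
# Steps (1)-(2) of the planner: collect the one-step candidates offered by
# the RTS and keep only those meeting every functional property.  The pool
# is rebuilt only after an adaptation or when the functional requirements
# change, exactly as stated in the text.

def build_candidate_pool(rts, functional_properties, fp_check):
    pool = []
    for reconfiguration, configuration in rts.candidates():              # step (1)
        if all(fp_check(configuration, p) for p in functional_properties):
            pool.append((reconfiguration, configuration))                 # step (2)
    return pool
```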
Then, in the periodic loop marked with (3), the analyser incorporates the received managed data into the analysable assets of each configuration in the pool. This is used to check the non-functional properties of the current configuration, taking advantage of suitable quantitative analysis tools. Whenever non-functional properties fail, a reconfiguration is triggered. At this moment, in the loop marked with (4), the decider is responsible for choosing a suitable configuration (and associated reconfiguration operation) from the pool to embody the adaptation step. This choice, which is part of what we call the triggering of a reconfiguration, is based on the results of the (qualitative and quantitative) analyses performed.

The executor component receives the selected reconfiguration and applies it to the running system. In particular, it computes the resuming state by resorting to the symbolic port automata of the current configuration, which were derived at design time, and translates it, along with the selected reconfiguration, into an executable reconfiguration script. This script is then applied to the system, resorting to a Reconfigurator entity that is associated to the framework presented in Section 2. A Reflector entity waits for the system to reach a quiescent state; when such a state is attained, it makes the system reflect the changes by applying the reconfiguration script. Concurrently, a sequence of updates is made: the system model is substituted by the selected configuration; the state of the RTS is updated accordingly, to meet the first feedback loop invariant; and finally, the candidate configurations in the pool are substituted by new candidates, computed in the new system state by the decider component (c.f., Fig. 5). Figure 6 depicts the sequence diagram for the Executor component, detailing the description above.

Fig. 6. Sequence diagram for the Executor component.

Triggering of reconfigurations

Usually, a reconfiguration of a system is enacted whenever a non-functional property fails, violating the SLA contract. However, this vision is not always enough, since the company owning the adaptable system may have other objectives besides providing the agreed SLA. For instance, reducing the operational costs of the system or agreeing to new functional requirements may constitute part of these objectives. Actually, in the approach proposed here, the adaptation triggering is led by a number of constraints reflecting both the objectives of the company w.r.t. the system and, consequently, the adaptation logic.

Definition 5 (Trigger Constraint). Let c_1,…,c_n ∈ Ξ. A trigger constraint is a boolean formula in disjunctive normal form, c_1∧…∧c_n.

For instance, (QoS.p,>,100) ∧ (SYS.c,min,true) ∧ (FUN.s,eval,true) defines a trigger constraint that enacts an adaptation when the measure for the non-functional property p is not above 100, system-specific property c is not the minimum (when compared to the same property of candidate configurations), and functional property s does not eval'uate to true. Prefixes are omitted when the properties' provenance is clear from the context.

Once a trigger constraint is violated, the adaptation is unavoidable. But choosing a suitable new configuration is a complex task. It may even be non-deterministic or lead the system into an (infinite) chain of reconfigurations. To avoid this, it is necessary to define a base strategy to direct the choice of such configurations. Formally,

Definition 6 (Filter).
Let \(c_{1_{1}}, \ldots, c_{1_{n}}, c_{2_{1}}, \ldots, c_{2_{m}}, \ldots, c_{k_{1}}, \ldots, c_{k_{l}} \in \Xi \). A filter is a non-empty, finite sequence of finite sequences

$$\langle\!\langle{c_{1_{1}},\ldots,c_{1_{n}}}\rangle, \langle{c_{2_{1}}, \ldots, c_{2_{m}}}\rangle, \ldots, \langle{c_{k_{1}}, \ldots, c_{k_{l}}}\rangle\!\rangle $$

In the sequel, parentheses are omitted to simplify notation, and the elements of a filter are separated by '|'. A filter is used to discard, in sequence, candidate configurations that do not satisfy the constraint property. For example, the filter (composed of just the mandatory part) (QoS.p,>,105),(QoS.q,max,true) discards, in a first step, candidate configurations that do not deliver non-functional property p above value 105 and, in a second step, it takes (from the remaining configurations) the one that delivers the maximum value for property q.

However, in some situations the filter may either discard all configurations or more than one configuration may prevail. In these cases it is possible to add optional filters, to be used whenever the previous filters do not find a suitable configuration. Consider, for instance,

$$(\mathsf{QoS}.p,>,105), (\mathsf{QoS}.q,\mathsf{max},\mathsf{true})\,|\,(\mathsf{QoS}.p,>,95). $$

In the case that no configuration is able to deliver a value above 105 for property p, and the second constraint is not able to pick a single configuration with a maximum value for property q, then the optional filter (the one after '|') is applied to the whole pool of configurations and it will discard those that do not deliver a value above 95 for property p. Extra optional filter elements may be added to handle the cases where no configuration, or more than one, remains. If multiple configurations still prevail, the default is to select the first one in a ranking based on a prioritisation of the requirements. For an even finer and more controlled selection of a suitable configuration, however, constraints can be specified for each state of the RTS (c.f., Definition 4). These act as specific pre-conditions to the inclusion of the corresponding configuration in the pool of candidates.

Application case: Adaptable-ASK

This section illustrates the application of the adaptation approach proposed in this paper to a fragment of the ASK (Access Society's Knowledge) system. ASK is communication software, from the Dutch company Almende, whose objective is to mediate between consumers and service providers (e.g. between a company looking for a temporary worker and an available person that matches such a requirement). Matching mechanisms are used to pair the participants, according to their needs (consumers) and their profiles (providers). The business goals of the ASK system are set to deliver the best consumer-provider match in the lowest time possible. This is to maximise the users' experience and their consequent return, which is the main source of revenue. On top of this, the company wants to achieve such goals while keeping the entailing costs low.

The architecture of ASK is modular, counting on three high-level components: a web-based front-end (the interface for the users), a database (that stores typical business data) and a contact engine (responsible for the matching and processing of contacts). The contact engine is the locus of the business: it collects the users' requests, converts them into tasks and processes them, generating requests to an Executer component.
Within the Executer, requests are enqueued into an Execution-Queue (EQ) until a HandleRequestExecution (HRE) web-service is ready to take each one and generate best-fit connections between service providers and consumers. The server running the HRE service is not dedicated, but also handles other processes. Since its task of finding and establishing the best consumer-provider connection is time and resource (mainly memory) consuming, there is a top limit of 20 HRE service instances able to run concurrently. On average, each instance of the HRE service takes 0.703 s to produce an output (i.e., accepts approx. 1.422 requests per second); this means that the server is potentially able to deal with roughly 28.44 requests per second. The EQ queue runs on a different server and is able to enqueue and dequeue at a rate of 10000 jobs per second. The coordination model for the Executer component is as simple as shown in ➊ of Fig. 7, where a and hre are ports connecting the web interface and the HRE service, respectively; the fifo channel represents the EQ queue.

Fig. 7. RTS for the Adaptable-ASK system.

The ASK system was previously studied regarding performance and resource allocation, from a static perspective (Moon et al. 2011; Moon 2011). However, the system performance fluctuates according to contextual changes. In fact, from years of experience, logs and monitored data, the ASK team has learnt that during the night there is, usually, a drop in user requests, and that after lunch until mid-afternoon such demand reaches a peak. Moreover, it was found that roughly every six months there is a slight down time on the server where the HRE web-service is hosted. In these situations, a fixed architecture and a fixed number of resources are probably the least interesting configuration for the company. Thus, adaptation plays an important role here, in an attempt to contract the right amount of system resources and to define the most appropriate behaviour for the right environmental settings.

Planning adaptations

The context in which the ASK system operates was studied along two axes: user requests and HRE server downtimes. As already discussed, the HRE server downtimes were observed twice per year. Therefore, the rate of failure is about 6.43×10^-8 per second. Another important observation was that the mean time to recover from a failure was about 10 s. The user requests distribution over the (non-uniform) intervals of a day is depicted in Table 1 (Requests to the ASK system during a day).

Considering these values, it was possible to define configurations that would, most likely, overcome such changes in the environment. Fig. 7 shows part of the RTS produced for the adaptation strategy of the Adaptable-ASK system. Configuration ➊ is the original coordination pattern, resorting to one queue; it has a cost per hour of €0.47. Configuration ➋ is a scaled-up version of ➊, where more memory was added to the original queue; it has a cost per hour of €0.54. Configuration ➌ is a scaled-out version of ➊, where a second HRE server (with the same performance) is added in such a way that both servers, connected to hre1 and hre2, execute in parallel; this configuration has a cost per hour of €0.67. Finally, configuration ➍ is a scaled-up and scaled-out version of ➊, where more memory and a second server are added in such a way that both servers, connected to hre1 and hre2, execute in parallel; it has a cost per hour of €0.74.
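For planning purposes, the four configurations and their hourly costs can be laid out as RTS states at design time. The sketch below is purely illustrative: the transition labels and which reconfiguration connects which pair of configurations are assumptions (Fig. 7 is not reproduced here), and the pattern descriptions are placeholders.

```python
# Design-time layout of the Adaptable-ASK configurations (costs in EUR/hour
# taken from the text).  The transition structure is a plausible guess at
# Fig. 7, not a reproduction; backward reconfigurations are omitted.

CONFIGS = {
    "c1": {"cost": 0.47, "pattern": "original (one queue)"},
    "c2": {"cost": 0.54, "pattern": "scaled up (bigger queue)"},
    "c3": {"cost": 0.67, "pattern": "scaled out (second HRE server)"},
    "c4": {"cost": 0.74, "pattern": "scaled up and out"},
}

TRANSITIONS = [
    ("c1", "r1", "c2"), ("c1", "r2", "c3"), ("c1", "r3", "c4"),
    ("c2", "r4", "c4"), ("c3", "r5", "c4"),
]

def candidates(current):
    """One-step candidate configurations from the current RTS state."""
    return [(label, target) for src, label, target in TRANSITIONS if src == current]

print(candidates("c1"))   # [('r1', 'c2'), ('r2', 'c3'), ('r3', 'c4')]
```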
The reconfiguration operations are represented simply as r_i (for i = 1..8). Their concrete details are not relevant for this discussion. Also, to enhance readability, the obvious backward reconfigurations are omitted.

Analysing RTS configurations

In a simple analysis, it is possible to see how each configuration performs against the variability of the environment data. We used the CooPLa (Oliveira and Barbosa 2013a) and ReCooPLa (Rodrigues et al. 2014) languages, the associated editor and its IMC Reo tool plug-in (c.f., Fig. 8) to enable such analysis. CooPLa and ReCooPLa are lightweight languages to specify coordination patterns and reconfigurations, respectively, according to the framework introduced in Section 2. Their companion editor, the CooPLa editor (CooPLa Team 2014), is an Eclipse plug-in with features for edition-time code completion, semantic suggestions and visualisation of coordination patterns. The IMC Reo tool is a plug-in of the editor that converts coordination patterns into IMC Reo models (Oliveira et al. 2014, 2015), which can then be converted to inputs for a range of well-known quantitative analysis tools. PRISM was used in this case study to verify the quantitative properties asserted on each configuration.

Fig. 8. The CooPLa editor and IMC Reo tool plug-in. A: the main text editor with the CooPLa specification of the scaled-up configuration (➋) and its stochastic instance; B: the view for the visualisation of the pattern under edition; C: the IMC Reo tool wizard, to convert the configuration into a PRISM model.

A property of interest for the ASK team is the throughput ratio (TR) for the long run, that is, the ratio between the effective throughput and the maximum throughput possible. In PRISM, such a property can be formulated using the notion of rewards as follows: R{"runs"}=? [ S ] / T, where runs is a reward structure that assigns the value 1 to each transition that transmits data to hre1 (and hre2), and T is a variable representing the user requests. Table 2 (Steady-state throughput ratio analysis for the several hour intervals) summarises the values obtained for this property at the precise rate of user requests assigned to each hour interval.

The non-faulty server (NFS) and faulty-server (FS) marks are relative to experiments where, in the first one, the server connected to port hre1 was always available and, in the second, it was constantly failing (accepting one request in each 10 s); in both cases, the server on port hre2 (when present) was always up. The graphs in Fig. 9 provide a similar view, but now depicting the evolution of the TR property depending on the number of user requests (which vary from 0 to 30 requests per unit of time). The upper graph shows the evolution of TR for the servers without failures; the bottom one shows the same evolution considering the presence of a faulty server, in the conditions explained before.

Fig. 9. Performance analysis of the throughput ratio property for the several configurations: without faulty server (above) and with faulty server (below).

Predicting adaptations by objectives, constraints and filters

Adding resources like servers and memory to the system is costly, as shown by the cost per hour indicated for each configuration. Assuming that these resources are paid per use, as in a cloud environment, it is essential to spend only the minimum required time on the proposed configurations.
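To get a rough feel for how the throughput ratio behaves as the request rate grows, a plain birth-death queue can stand in for the IMC Reo/PRISM models used above. The sketch below is only an approximation for intuition: the service parameters come from the text, but the queue capacity K and the modelling choices are assumptions of this sketch, not the paper's analysis.

```python
# Rough stand-in for the TR analysis: an M/M/c/K birth-death chain with
# c = 20 HRE instances at 1.422 requests/s each (figures from the text) and
# an assumed queue capacity K.  TR is read here as the steady-state fraction
# of offered requests that get accepted and eventually served.

def throughput_ratio(lam, mu=1.422, c=20, K=120):
    probs = [1.0]                              # unnormalised steady-state probabilities
    for n in range(1, K + 1):
        probs.append(probs[-1] * lam / (mu * min(n, c)))
    p_full = probs[-1] / sum(probs)            # probability that the queue is full
    return 1.0 - p_full                        # accepted fraction of arriving requests

for lam in (5, 15, 25, 28, 30):
    print(f"lambda = {lam:2d} req/s   TR = {throughput_ratio(lam):.3f}")
```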
But delivering a service only with minimum costs in mind is not advantageous either, since the resulting slowness of the system will alienate its customers. This brings the need to define a suitable service level agreement (SLA) for the system. As such, the ASK team defined that an optimal value for the TR QoS property would be above 0.970 (note b); in the sequel, 0.970 is referred to as the TR threshold, or t for short. This being fixed, the ASK team then defined two important properties for Adaptable-ASK, QoS.TR and SYS.cost, and based on them the following trigger constraint: $$(TR, \geq, t) \land (cost, \textsf{min}, \textsf{true})$$ Table 3 associates the most suitable configuration to each hour interval, considering multiple adaptation objectives defined by suitable filters.
Table 3 Predicted configurations for each hour interval and associated triggering filters
The top two rows are concerned with the selection of candidate configurations filtering, exclusively, by minimum cost and maximum TR value, respectively. As expected, these filters define adaptation strategies that make the system practically fixed. The first reduces company costs, but also the TR values; the second augments the TR value (increasing customer satisfaction), but at higher costs. The third row presents a filter that first selects the configurations delivering a TR value above the SLA threshold, and then selects the one with minimum cost. For the NFS setting (i.e., all the servers are up), the selected configurations are balanced and thus the adaptation is more in line with the company objectives. In the FS situation (i.e., one server is continuously failing), however, there is no configuration able to deliver a TR above the desired threshold for the interval where the user requests reach their peak (i.e., 14–17). In this case, the system would not reconfigure itself. If for some reason the active configuration at that moment is ➊ or ➋, then the system would perform poorly (see Fig. 9, bottom graph) for a while, increasing the losses for the company. On the other hand, the fourth row extends the previous filter by adding an optional filter that selects the configuration delivering the maximum TR value whenever the first filter is not able to propose a configuration. Therefore, it is now possible to have a suitable configuration for situation (ii), when the users' demand is higher. Since the last filter provides a balanced adaptation strategy, it was chosen by the ASK team as the runtime filter that ensures the company objectives (a concrete sketch of this filter is given below).
A runtime situation
At runtime, however, the environment changes are more continual and unpredictable. Therefore, the previous analysis and the adaptation strategy form only a basis for what must be finely tuned at runtime. In any case, the more accurate the analysis in the offline phase is, the better the results in the online phase will be. Since the dynamic part of the adaptation methodology proposed here is not currently implemented, we used simulation to predict how the defined adaptation strategy for the Adaptable-ASK system would behave in a real runtime situation. Thus, the system's execution for one day was simulated. It was assumed that servers would not fail along this period, and that the user requests would be obtained from traces of the system, such that the average in each part is the one shown in Table 1. The results of the simulation are given in Fig. 10.
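Before turning to the simulation results, the runtime filter chosen by the ASK team (the fourth row of Table 3) can be made concrete. The sketch below is a purely illustrative Python rendering of that filter, not the ReCooPLa or engine code of the framework; the configuration records and the TR predictions are hypothetical, and only the hourly costs come from Fig. 7.

# Illustrative rendering of the chosen filter: keep the configurations whose
# predicted TR meets the SLA threshold and take the cheapest; if none qualifies,
# fall back to the configuration with the maximum predicted TR.

TR_THRESHOLD = 0.970   # the SLA threshold t

def select_configuration(candidates, predicted_tr):
    # candidates: list of dicts {"name": str, "cost_per_hour": float}
    # predicted_tr: dict mapping a configuration name to its predicted TR at the
    # current request rate (obtained, in the paper, from the PRISM analysis)
    above_sla = [c for c in candidates if predicted_tr[c["name"]] >= TR_THRESHOLD]
    if above_sla:
        return min(above_sla, key=lambda c: c["cost_per_hour"])    # cheapest compliant
    return max(candidates, key=lambda c: predicted_tr[c["name"]])  # optional fallback

# Example with the four configurations of Fig. 7 and made-up TR predictions.
configs = [
    {"name": "C1", "cost_per_hour": 0.47},
    {"name": "C2", "cost_per_hour": 0.54},
    {"name": "C3", "cost_per_hour": 0.67},
    {"name": "C4", "cost_per_hour": 0.74},
]
tr = {"C1": 0.91, "C2": 0.95, "C3": 0.975, "C4": 0.984}
print(select_configuration(configs, tr)["name"])   # prints "C3"

The order of the two filters matters: cost only discriminates among configurations that already satisfy the SLA, and the maximum-TR fallback is consulted only when no configuration does.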
Performance was evaluated at each minute, considering the current request rate and the four configurations: the active one and the three candidates. The exception is when the active configuration is ➋ or ➌, for which the candidates are only configurations ➊ and ➍ (note c).
Performance of the adaptable ASK system. For one working day (above); and a zoomed view of concrete configurations (below).
From the top graph in Fig. 10, we see that the first need for adaptation occurred at minute 480, which means that for the first 8 hours of the day the system showed good performance while in configuration ➊. Then, in the first minutes of the 8th hour of execution, the system adapts until it stabilises for the current amount of requests. However, from minute 720 until minute 840 the system is constantly adapting itself. Three hours later, at minute 1020, the system adapts again a few times until it stabilises for the rest of the day. In the bottom graph of Fig. 10, we zoomed in on a zone that spans the 20 minutes before entering the peak of requests (at minute 840) and the first 10 minutes of it. Before entering the peak zone, the system is able to deal with the requests in its original configuration, ➊. Notice that the second adaptation to configuration ➊ is enacted not because the system is performing below the TR threshold, but because there is a cheaper configuration that delivers similar performance. This is the intended behaviour as requested by the ASK team. However, when the users' requests increase significantly, the system performs below the TR threshold and therefore adapts to configuration ➍. In the subsequent minutes there are no adaptations even though the system performs slightly below the TR threshold. This is because (i) there are no selectable configurations after filtering, and (ii) the alternative filter (TR, max, true) defined for the adaptation strategy keeps selecting configuration ➍. In this simulation, over 24 hours the system adapted 48 times, with a mean time between adaptations of 1800 s (i.e., 30 minutes) (note d). Although this seems to be a reasonable value, it may be misleading. In fact, notice that the system only adapts itself in roughly three parts of the day, the most critical one spanning from minute 720 to minute 840, where 75% of the adaptations occur (a local mean time between adaptations of 200 s, or roughly 3.3 minutes). This increases the time spent in reconfigurations (which, for simplicity, we assumed to be instantaneous in the simulation), and consequently decreases the productivity of the system. Such a situation can be mitigated by increasing the sophistication of the adaptation algorithm, namely in what concerns analysis and decision. For instance, instead of choosing a configuration based on its performance at the current rate of requests, we could use the history of requests (or at least the last n rates) to predict the next one, and base the decision on the system performance for such a prediction. Also, we could resort to some notion of hysteresis to gracefully stabilise the system. For instance, this could delay the next adaptation for some time, or postpone a switch to a cheaper configuration until that configuration ensures a TR value above some stricter threshold X > 0.970. The latter would improve performance and, in the long run, decrease the costs (which may be associated with reconfigurations).
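One possible way of realising the hysteresis just mentioned is sketched below. This is an assumption of ours, shown for illustration only and not part of the implemented framework: an SLA violation still triggers an immediate adaptation, but a switch to a cheaper configuration is enacted only after that configuration has delivered a TR above the stricter threshold X for several consecutive evaluations.

# Illustrative hysteresis guard on top of the filter-based decision.
# T is the SLA threshold; X > T is a stricter threshold used only to decide
# cost-driven downgrades; STABLE_FOR is the number of consecutive one-minute
# evaluations a cheaper configuration must satisfy X before being adopted.

T = 0.970
X = 0.975
STABLE_FOR = 5

def decide(active_tr, candidate_tr, candidate_is_cheaper, streak):
    # Returns a pair: the decision ("stay" or "switch") and the updated streak.
    if active_tr < T:
        return "switch", 0            # SLA violated: adapt immediately
    if candidate_is_cheaper and candidate_tr >= X:
        streak += 1
        if streak >= STABLE_FOR:
            return "switch", 0        # downgrade only once it looks stable
        return "stay", streak
    return "stay", 0                  # otherwise keep the active configuration

Tuning X and the length of the stable window trades reconfiguration overhead against cost savings, which is precisely the balance discussed above.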
From an economic point of view, the simulation has shown that the company would pay around €11 per day for the system configurations and resources used. Compared with the cost of constantly using the most expensive configuration (around €18 per day), this shows that adopting this strategy would save the company about €2500 per year. While it is not a huge value, it shows that there are benefits in using this approach. Further refinements of the RTS and its constraints have the potential to improve the savings. In order to keep the example simple and understandable, the coordination patterns considered in Fig. 7 are simple and omit several parts of the coordination of the whole ASK system. We deliberately set aside the use of structural properties to define system functional requirements to be preserved during the adaptation, which could be used to rule out some candidate configurations. Moreover, being a simulation, this example has left state transfer out of the equation. The strategy for consistent state transfer adds only negligible computational effort to the whole adaptation strategy; thus, it would not affect the obtained results. The symbolic port automata in Fig. 2 are similar (up to port and state names) to those underlying the configurations considered in the example: the top one corresponds to configurations ➋ and ➍; the bottom one corresponds to configurations ➊ and ➌. For instance, the state transfer computed for Fig. 2 would also apply in a reconfiguration from configuration ➊ to ➋.
Adaptation as a Service
The self-adaptation strategy we propose in this paper can be reused in different systems, since only its central pieces (properties, constraints, filters and the RTS) are system-dependent. This ensures the much-desired separation of concerns between managed and managing systems (Weyns et al. 2013). Such a separation is not a novelty: most self-adaptation approaches promote it (Garlan et al. 2004), and the MAPE-K reference model almost obliges it. However, notwithstanding the separation of concerns, managed and managing systems usually run in the same physical execution environment. This degrades the performance of the managed system, since the feedback loop allocates part of the available resources for its own use. A possible solution to this problem is to physically separate both entities. This entails the need for companies to acquire more processing and storage power, as well as to be willing to manage such extra resources with all the associated costs. A smoother solution is to rent virtual machines from a cloud service and deploy the feedback loop system therein. On the one hand, this eases management; on the other hand, it requires extra effort to set the whole system up. In order to avoid these problems, we propose a new strategy for delivering adaptation as a service (AaaS).
Architecture and main workflow
The essential components of our feedback loop (monitor, analyser, decider and executor) are loosely coupled entities with a specific behaviour. Regarding them as services is therefore natural. With this in mind, we propose to refactor our self-adaptation strategy of Section 4, so that the essential parts of the feedback loop are deployed in the cloud for immediate usage.
The expected result is that the common computational activities for adaptation (e.g., analysing data to perceive the need for adaptation, or deciding which reconfiguration to choose among a set of possible ones) are transparent to (and not developed by) the users. Fig. 11 presents an overview of the expected global architecture, along with traces of the main workflow for both users and the adaptation service. In the next paragraphs, the adaptation service will be referred to as AaaS, and the hosting cloud as the AaaS cloud.
Adaptation as a Service architecture overview.
Essentially, we take all the tasks that are known to be time and resource consuming, and encapsulate them as services. In particular, we assume the existence of online versions of established analysis tools (e.g., CADP, PRISM, IMCA, HyLoRes, among others) that make available, through public interfaces, services of which AaaS will be a client. Moreover, the tools associated with the formal framework presented in Section 2 are also assumed to be available as services in a dedicated cloud environment. These two sets of services are expected to take most of the workload off the feedback loop. The feedback loop constitutes the core of the AaaS. We made it more comprehensive by supporting multiple monitoring and decider components, in an attempt to decentralise the feedback loop (André et al. 2011; Nallur and Bahsoon 2013; Vromant et al. 2011; Weyns et al. 2013). This comes at the price of extra coordination and synchronisation effort, but it is essential. For instance, instead of having a single filter-based strategy to decide reconfigurations, we can have several others, including one that uses artificial intelligence techniques (e.g., case-based reasoning) to make such a decision. The results of all decider components have to be coordinated. Only one will prevail, but such a decision will be endowed with extra robustness. AaaS is able to track more than one system. The cloud support for multi-tenancy and the service orientation of the approach allow AaaS to deliver the same adaptation service, with the same expected quality, to several systems. For this, each tracked system is given a space in a storage centre, where the RTS and the current pool of configurations are placed. AaaS remains loyal to the coordination-centred vision for reconfigurations, though. Moreover, in Section 4 we assumed that the managed systems could be distributed but that their coordination layer had to be centralised. With a large-scale approach like AaaS, that assumption makes no sense. Thus, although there must be a main coordination entity for a distributed system, there can be several sub-coordination entities distributed over several nodes of the same system that are themselves tracked by AaaS. Again, this is based on the theories for feedback loop decentralisation discussed by Weyns et al. (Vromant et al. 2011; Weyns et al. 2013) and, consequently, requires a distributed notion of coordination-targeted reconfigurations (Koehler et al. 2009), which is out of the scope of this paper. In the sequel we explore the offline and online phases of this cloud-based approach to system adaptation.
6.1.1 The offline phase
In this phase the architects have to prepare the assets (as suitable files) that make adaptation possible.
This includes the system properties, which translate functional and non-functional requirements; the constraints, which define the system goals for adaptation; the filters, which define the main strategy for deciding the reconfiguration that leads the system into a desired configuration; and finally the RTS. The production of the RTS is a complex and time-consuming task. To help the architects accomplish it, the assumed reconfiguration services can be used; in particular, the IMC Reo translation service, which becomes computationally heavy as the complexity of the system coordination patterns grows. The analysis tools for fine-tuning thresholds and other measures are also assumed to be used from the available services. In the end, the RTS is expected to be delivered as a comprehensive set of files written in CooPLa (for the definition of coordination patterns) and ReCooPLa (for the reconfiguration scripts). Together with the other assets, all these files have to be uploaded to the AaaS cloud through the configuration interface, as depicted in Fig. 11. Once uploaded, the RTS files are transformed into an RTS model, and all the associated assets (e.g., the final PRISM files) of each state are conveniently generated and stored in the storage centre. The configuration interface is expected to guide the architect through the whole configuration of an instance of AaaS. Besides uploading the required files, the architect is also able to choose, for instance, which analysis tool(s) shall be used to verify the properties of the system or which strategy(ies) shall be applied to decide the reconfigurations to apply when needed. In addition, the architect is responsible for coupling monitors to systems that are able to ship data to the AaaS cloud every time a (relevant) change occurs, either in the environment or internally. The architect also has to define a local mapper component that provides a reflection model of the managed system. A local executor component, actually an AaaS off-the-shelf component, also needs to be attached to the system.
6.1.2 The online phase
When the configuration is over and the architect decides to explicitly enable AaaS to manage the system, the online phase begins. As expected, monitors ship data to the AaaS cloud, where it is synchronised and merged. A monitor merger service is assumed to merge the monitored data and send it to the analyser service. The latter behaves exactly as before; the particularity is that it now invokes services for the necessary quantitative analysis. It is still responsible for triggering the need for a reconfiguration by analysing the user-uploaded constraints. When an adaptation is triggered, the decider (or deciders) start the analysis to plan a new adaptation. Depending on the user configuration, one or more strategies may be associated with the managed system. Each strategy is different. For instance, the filter-based strategy uses the analysis services to analyse the configurations in the pool, which are sent as a single workload. A strategy adopting case-based reasoning mechanisms would delegate its tasks to services for that purpose, but would rely on a knowledge centre to define its decision, as depicted in Fig. 11. The decider service is also responsible for updating the pool of configurations, as explained in Section 4. Upon decision, the chosen reconfiguration is passed to the executor. The executor translates the reconfiguration into a script able to concretely apply the changes to the managed system.
This script is passed to the local executor component. The latter uses the reconfiguration services to compute the resuming state and, when the system enters a quiescent state, applies the changes via the mapper component. The option of having a local executor component is due to AaaS not being aware of the internal state of the systems it manages. Thus, interrupted and resuming states have to be computed locally. This is also necessary because such states have to be computed in the instant before the changes are applied to the system, so that the managed system consistently resumes its production. The AaaS approach brings several benefits when compared to traditional approaches. It promotes a clear (physical) separation between the managed system and its feedback loop. It allows architects and developers to focus on the design and development of the system and, consequently, frees them from dealing with the always complex implementation of feedback loop components. It eases the evolution of legacy static systems into self-adaptable ones and allows for more comprehensive and robust decisions, by enabling the combination of several strategies. Moreover, it enables the decentralisation of the feedback loop, augmenting the dependability of the system as a whole. AaaS is a one-size-fits-all approach to adaptation. This can be seen as a drawback, but in fact the approach is highly configurable in order to support the demands of its tenant systems. Indeed, the adaptation logic is mainly delivered by the architects in the uploaded analysable assets; AaaS essentially performs intense computations in order to deliver decisions based on such assets. The adaptation logic is not static. At any time the company may change its goals or the system requirements, or the architects may update the RTS to cope with new system configurations. This entails re-uploading new asset files. AaaS is expected to reconfigure its behaviour to conform to these changes immediately. Moreover, the customisation of AaaS behaviour can be performed at any time as well. Although the AaaS service is configured by the architects, behind the scenes a local feedback loop will ensure the correct operation of each AaaS instance monitoring each client system. This will enable the necessary adaptations when, for instance, some component of an AaaS instance fails to respond, or when the AaaS infrastructure needs to enlarge its computational power to continuously ensure correct load balancing. The approach, however, has limited applicability to time-critical software systems or to applications highly distributed across mobile devices. In the first case, latency may impair timeliness; in the second, mobile networks are often unstable. However, surveillance systems, asynchronous communication systems (like ASK) and many others that may adapt to context changes but are not time-critical would benefit from such an infrastructure. Usually deployed in environments with a stable network infrastructure, these systems are able to exchange data with the remote servers of AaaS and perform the necessary computation for adaptability. Our proposal of a self-adaptation strategy mainly focuses on two aspects. The first one concerns the reconfigurations of the coordination layer and their planning and organisation in a relational structure. The second one is the integration of a formal framework in a feedback loop, allowing for detecting, deciding and triggering adaptations.
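As a deliberately simplified illustration of the first aspect, an RTS can be encoded as a labelled graph whose nodes are configurations annotated with their hourly cost and whose edges are reconfigurations. The Python sketch below uses the four configurations of the ASK example; the particular assignment of edges r1 to r5 and the helper function are hypothetical and only meant to convey the shape of the structure.

# Hypothetical encoding of a (fragment of an) RTS as a labelled graph.
# Nodes are configurations with an hourly cost; edges are reconfigurations.
# Inverse reconfigurations are implied, as in the RTS of Fig. 7.

rts = {
    "configurations": {
        "C1": {"cost_per_hour": 0.47},   # original coordination pattern
        "C2": {"cost_per_hour": 0.54},   # scaled up (more queue memory)
        "C3": {"cost_per_hour": 0.67},   # scaled out (second HRE server)
        "C4": {"cost_per_hour": 0.74},   # scaled up and out
    },
    "reconfigurations": [
        {"name": "r1", "source": "C1", "target": "C2"},
        {"name": "r2", "source": "C1", "target": "C3"},
        {"name": "r3", "source": "C1", "target": "C4"},
        {"name": "r4", "source": "C2", "target": "C4"},
        {"name": "r5", "source": "C3", "target": "C4"},
    ],
}

def candidates(rts, active):
    # Configurations reachable from the active one in a single reconfiguration.
    return [r["target"] for r in rts["reconfigurations"] if r["source"] == active]

print(candidates(rts, "C1"))   # prints ['C2', 'C3', 'C4']

Annotating each edge with an application cost would turn this structure into the weighted automaton discussed in the conclusions.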
Several approaches to implementing feedback loops for self-adaptive systems are reported in the literature. In general, these approaches agree on external, reusable and component-based feedback loop implementations, rather than on internal, monolithic, and intertwined implementations (Cheng et al. 2009; Huebscher and McCann 2008; Salehie and Tahvildari 2009). How adaptations are decided, and which assets are used to aid such decisions, differs from case to case. In the sequel we compare our approach with other works along three dimensions: i) quantitative analysis; ii) design, detection and selection of adaptations; and iii) the use of models as system-knowledge artefacts for the feedback loop implementation.
Adaptations and quantitative analysis
In (Calinescu and Kwiatkowska 2009; Calinescu et al. 2012), the authors present a framework for the adaptation of software systems, where system components are modelled as Markov chains. The framework takes advantage of quantitative model checking, using PRISM, to analyse the components and dynamically adjust the system to its objectives and the changes in the environment. Specific policies are used to define constraints to which the system should adhere, or measures of success that it must optimise. Adaptations are made on the configurable parameters of the system that realise the policies. Our approach shares with this one the use of quantitative model checkers (e.g., PRISM) to analyse the system. However, we focus on the coordination layer and use (interactive) Markov chains to analyse it, rather than the components themselves. Moreover, the adaptations we assume are made to the structure of the coordination and not to the parameters of the system. Another approach, documented in reference (Becker et al. 2013), is based on the simulation of a specifically modelled system to gradually find a suitable point to trigger adaptations and, consequently, to fulfil system requirements. This is done for a range of possible (static) contexts and through multiple design iterations. Similarly, we analyse possible system coordination configurations, but instead of proposing a single design, we propose a relational structure that captures several designs, which are likely to perform well against contextual variability. Tools for performance analysis (Becker et al. 2009; Bondarev et al. 2006; Grassi et al. 2009, 2007) that take into account the performance of the original system, and may integrate part of a feedback loop component to enact adaptations, should also be mentioned.
Languages for adaptation specification
In (Huber et al. 2014) the authors propose the S/T/A domain-specific modelling language to describe runtime adaptation processes on top of QoS models of component-based system architectures. S/T/A is used to define strategies, tactics and actions for adaptations. Strategies define system goals; tactics define how to proceed with an adaptation; and actions are the atomic elements that change the system configuration. Weights are assigned to tactics, after simulation, to define the impact of applying them to the running system. They can then be used by strategies to determine which tactic to apply next. Our approach also defines strategies (referred to as trigger constraints), tactics (reconfigurations) and actions (which are seen as primitive reconfiguration operations). Unlike this approach, however, we do not base the choice of a reconfiguration only on its impact.
Instead, we concretely define filters to select the most appropriate reconfiguration for the current environment settings. Reference (Cheng and Garlan 2012) introduces Stitch, a language to define strategies and tactics. Each tactic defines a condition for its applicability, a set of actions (that apply changes to the system) and a set of effects, which may be regarded as adaptation post-conditions. A strategy encapsulates an adaptation process by using tactics in a deterministic if-then approach. In this paper we do not present a language to concretely define the triggering of adaptations and the selection of the most appropriate reconfiguration. However, we define constraints and filters, which detect and enact adaptations based on the general objectives of the system and not on specific events. Other languages like Acme (Garlan et al. 1997), Wright (Allen 1997) or YAWL (van der Aalst and ter Hofstede 2005) allow for the specification of adaptations. However, their specific use as ADLs or workflow languages limits their use in the specification of proper adaptation strategies.
Models in adaptation approaches
Models are used extensively as part of feedback loop implementation strategies. They usually convey the architecture of a running system at a level of abstraction suitable for analysis. In (Garlan et al. 2004), notions of architectural style, invariants, operators and properties are used to define strategies of adaptation, where invariants are checked upon a model of the system that is seen as an abstract graph of computational elements, upon which behaviour and specific properties are defined. In (Litoiu et al. 2008), the authors propose an adaptation strategy to guarantee web service quality. In particular, they propose a control loop implementation that is based on a model of the web service and a robust estimator, used to keep the QoS values in accordance with the SLA. In (Floch et al. 2013; Hallsteinsen et al. 2012), MUSIC is presented as a framework for model-driven development of (component- and SOA-based) adaptable mobile applications in the context of ubiquitous computing. It relies on models of the context and of the application architecture, the latter annotated with the application's adaptation capabilities and its dependencies on the context. Moreover, it instantiates the MAPE-K architecture and uses a reasoner to search the set of possible configurations for the optimal solution in the current context. When an adaptation is required, a reconfiguration script is derived and executed. How the best configuration is determined, concerning QoS and the SLA, is not clearly reported, however. In (Agrawal et al. 2003; Fischer et al. 2000), UML is used along with graph transformation techniques to define the adaptation of systems. In this approach, performance analysis is not natural, but checking behavioural and structural properties becomes easier using constraint languages like OCL. Nevertheless, all of these approaches use ad hoc models. Our approach, on the other hand, resorts to a generic graph-based model that may borrow structure and behaviour from formal models like Reo, later transformed into (interactive) Markov chains. Also, to the best of our knowledge, this is the first attempt at a self-adaptation strategy for software systems focused on the analysis and adaptation of the coordination layer that leads the global system architecture.
Decentralised self-adaptation
Decentralised approaches to self-adaptation use several feedback loops (or several of their components) to control a system (typically complex and distributed) (Vromant et al. 2011; Weyns et al. 2013). In (Caprarescu and Petcu 2009) the authors, inspired by natural adaptive systems, propose a robust feedback loop for computational systems. Multi-agent technology and swarm intelligence are used to define decentralised feedback loops that mimic ant colonies. The authors stress that the use of multiple feedback loop agents enables robustness: when one agent fails, the others may continue by enacting self-organisation. Reference (André et al. 2011) proposes a fully decentralised framework (SAFDIS) for the adaptation of distributed service-based applications. Feedback loops are regarded as independent applications, external to the managed systems which they control. Each such loop adapts the associated members of the distributed managed application. Cooperation via coordination and negotiation is, however, part of the decision-making algorithm. Moreover, SAFDIS is implemented as a SOA system, enabling its components to be used as services by developers and architects. MOSES (Cardellini et al. 2012) is proposed as a methodology to support QoS-driven adaptation of service-oriented systems. In particular, it acts as a service broker in order to provide the best selection and binding of services for a suitable description of the system architecture and its companion non-functional requirements. Decentralisation occurs at the monitor level. Several monitors collect data about the QoS of regionally distributed pools of services that are candidates for binding to the managed system. The remaining tasks of the MAPE-K reference model are centralised, though. In (Nallur and Bahsoon 2013) the authors focus on a market-oriented programming strategy to define adaptation strategies. They consider several marketplaces where seller services offer their QoS attributes for some cost, and buyer applications bid for services with a desired QoS and the price they are willing to pay for such a service. Markets, as distributed places, make the approach decentralised, since several decider agents have to work in each market towards a suitable solution. Although the SAFDIS framework (André et al. 2011) is close to our proposal, none of the above decentralised approaches to feedback loops intends to deliver adaptation as a service.
This paper discussed an architectural adaptation strategy for systems able to self-adapt in accordance with the surrounding environment. It is based on two phases: one offline, where reconfigurations are planned and organised; and another online, which takes advantage of such an organisation of reconfigurations to autonomously choose one and adapt the system, as part of a monitoring feedback loop. This strategy acts on top of a concrete framework that allows the software architect to model and apply reconfigurations and to formally verify and reason about functional and non-functional (quantitative/probabilistic) requirements of the system architecture. We highlight the use of formal models to represent the coordination layer of a software system. Through source-to-source transformation techniques these models are transformed into suitable quantitative models, enabling runtime verification of both non-functional and functional requirements of the system.
This plays a crucial role in triggering adaptations and, in general, in the maintenance of software architecture quality and system consistency upon dynamic reconfigurations. The use of formal methods, in contrast to other approaches commonly employed by practitioners (e.g., UML, rule-based, etc.), allows for a precise specification of patterns, reconfigurations and properties, as well as their verification through appropriate tools. A slightly heavier, and certainly less usual, notation is the price to be paid. Nevertheless, both the CooPLa (Oliveira and Barbosa 2013a) and ReCooPLa (Rodrigues et al. 2014) editors, which support architectural design in this framework, and the plugged-in verifiers have user-friendly interfaces and are relatively easy to use. In any case, there is no alternative in Software Engineering to the road towards increased precision and rigour. Based on this adaptation architecture, the paper also proposes a cloud-based implementation of a feedback loop that is transparent to the users and delivers adaptation as a service. Among several advantages, we highlight the fact that it frees the users from actually developing such a feedback loop, and gives them total control over how the system shall evolve in each situation, by enabling a fully configurable cloud environment. Currently, we are developing a prototype implementation of the approach introduced in Section 4 on top of the reconfiguration framework mentioned in Section 2. Moreover, we are studying how the RTS model can be delivered as a weighted automaton where the edges are labelled with reconfigurations and their application costs. Weighted automata theory would allow for addressing overall reconfiguration properties like, for instance, "in one year, the overall time spent on reconfigurations shall remain below 120 s". A complex problem remains to be solved, though. As pointed out in the SBCARS'2014 session, we are naively assuming that each RTS state has a small number of transitions. Scalability issues would arise if that number grows bigger, meaning more configurations in the pool and consequently more time doing heavy quantitative analysis and arriving at a decision. A possible solution will resort to RTS-specific bisimulation techniques in order to minimise that structure.
(a) The mean time between failures (MTBF) QoS attribute of the server is consequently set to 15,552,000 s = (360 × 24 × 60 × 60)/2.
(b) For the purposes of this paper, the SLA of the ASK system comprises only this TR property and its derivatives, like throughput, latency or response time.
(c) Remember that inverse reconfigurations are omitted in the RTS of Fig. 7 but assumed to exist.
(d) Notice that reconfigurations were assumed to take effect in a negligible (near-instantaneous) amount of time.
Agrawal, A, Karsai G, Shi F (2003) A UML-based graph transformation approach for implementing domain-specific model transformations. Int J Softw Syst Modeling 1–19. Allen, R (1997) A formal approach to software architecture. PhD thesis, Carnegie Mellon, School of Computer Science, Pittsburgh, PA, USA. (January 1997). CMU Technical Report CMU-CS-97-144. André, F, Daubert E, Gauvrit G (2011) Distribution and self-adaptation of a framework for dynamic adaptation of services In: The Sixth International Conference on Internet and Web Applications and Services (ICIW), 16–21. IARIA, Red Hook, NY, USA. Arbab, F (2004) Reo: A channel-based coordination model for component composition. Math Struct Comp Sci 14(3): 329–366.
MATH MathSciNet Article Google Scholar Basu, A, Bensalem S, Bozga M, Combaz J, Jaber M, Nguyen TH, Sifakis J (2011) Rigorous Component-Based system design using the BIP framework. Software IEEE 28(3): 41–48. Becker, M, Luckey M, Becker S (2013) Performance analysis of self-adaptive systems for requirements validation at design-time In: Proceedings of the 9th QoSA '13, 43–52.. ACM, New York, NY, USA. Becker, S, Koziolek H, Reussner R (2009) The palladio component model for model-driven performance prediction. J Syst Softw 82(1): 3–22. Blackburn, P (2000) Representation, reasoning, and relational structures: a hybrid logic manifesto. Logic J IGPL 8(3): 339–365. Blackburn, P, de Rijke M, Venema Y (2001) Modal Logic. Cambridge Tracts in Theoretical Computer Science (53). Cambridge University Press, Cambridge. Bondarev, E, Chaudron M, With P (2006) A process for resolving performance Trade-Offs in Component-Based architectures In: Component-Based Software Engineering. Lecture Notes in Computer Science, vol. 4063, 254–269.. Springer, Berlin, Heidelberg. Bonsangue, M, Clarke D, Silva A (2012) A model of context-dependent component connectors. Science Comput Programm 77(6): 685–706. MATH Article Google Scholar Brauner, T (2010) Hybrid Logic and Its Proof-Theory. Applied Logic Series. Springer, Berlin, Heidelberg. Brun, Y, Serugendo GM, Gacek C, Giese H, Kienle H, Litoiu M, Müller H, Pezzè M, Shaw M (2009) Engineering Self-Adaptive systems through feedback loops In: Software Engineering for Self-Adaptive Systems. Lecture Notes in Computer Science, vol. 5525, 48–70.. Springer, Berlin, Heidelberg. Calinescu, R, Kwiatkowska M (2009) Using quantitative analysis to implement autonomic IT systems In: Proceedings of ICSE'09, 100–110.. IEEE Computer Society, Washington, DC, USA. Calinescu, R, Ghezzi C, Kwiatkowska M, Mirandola R (2012) Self-adaptive software needs quantitative verification at runtime. Commun ACM 55(9): 69–77. Caprarescu, BA, Petcu D (2009) A Self-Organizing feedback loop for autonomic computing In: Future Computing, Service Computation, Cognitive, Adaptive, Content, Patterns, 2009. COMPUTATIONWORLD '09. Computation World:, 126–131.. IEEE Computer Society, Washington, DC, USA. Cardellini, V, Casalicchio E, Grassi V, Iannucci S, Lo Presti F, Mirandola R (2012) MOSES: A framework for QoS driven runtime adaptation of Service-Oriented systems. IEEE Trans Softw Eng 38(5): 1138–1159. Cheng, BH, Lemos R, Giese H, Inverardi P, Magee J (2009) Software Engineering for Self-Adaptive Systems: A Research Roadmap In: Software Engineering for Self-Adaptive Systems. Lecture Notes in Computer Science, vol. 5525, 1–26.. Springer, Berlin, Heidelberg. Cheng, SW, Garlan D (2012) Stitch: A language for architecture-based self-adaptation. J Syst Softw 85(12): 2860–2875. Ciraci, S, van den Broek P (2006) Evolvability as a quality attribute of software architectures In: Proceedings of the International ERCIM Workshop on Software Evolution, 29–31.. UMH, Mons. Dobson, S, Denazis S, Fernández A, Gaïti D, Gelenbe E, Massacci F, Nixon P, Saffre F, Schmidt N, Zambonelli F (2006) A survey of autonomic communications. ACM Trans Auton Adapt Syst 1(2): 223–259. Fischer, T, Niere J, Torunski L, Zündorf A (2000) Story Diagrams: A New Graph Rewrite Language Based on the Unified Modeling Language and Java In: Theory and Application of Graph Transformations. Lecture Notes in Computer Science, vol 1764, 296–309.. Springer, Berlin, Heidelberg. Chap. 21. 
Floch, J, Frà C, Fricke R, Geihs K, Wagner M, Lorenzo J, Soladana E, Mehlhase S, Paspallis N, Rahnama H, Ruiz PA, Scholz U (2013) Playing MUSIC – building context-aware and self-adaptive mobile applications. SPE 43(3): 359–388. Garavel, H, Lang F, Mateescu R, Serwe W (2012) CADP 2011: a toolbox for the construction and analysis of distributed processes. Int J Softw Tools Technol Transfer 15(2): 89–107. Garlan, D, Monroe RT, Wile D (1997) ACME: An Architecture Description Interchange Language In: Proceedings of CASCON'97, 169–183.. IBM Press, Cranbury, NJ, USA. Garlan, D, Schmerl B, Cheng SW (2009) Software Architecture-Based Self-Adaptation. In: Zhang Y, Yang LT, Denko MK (eds)Autonomic Computing and Networking, 31–55.. Springer, US. Chap. 2. Garlan, D, Cheng SW, Huang AC, Schmerl B, Steenkiste P (2004) Rainbow: Architecture-Based Self-Adaptation with reusable infrastructure. Computer 37(10): 46–54. Gat, E (1998) Three-layer architectures. In: Kortenkamp D, Bonasso RP, Murphy R (eds)Artificial Intelligence and Mobile Robots, 195–210.. MIT Press, Cambridge, MA, USA. Grassi, V, Mirandola R, Randazzo E (2009) Model-Driven assessment of QoS-aware Self-Adaptation In: Software Engineering for Self-Adaptive Systems. Lecture Notes in Computer Science, vol. 5525, 201–222.. Springer, Berlin, Heidelberg. Grassi, V, Mirandola R, Sabetta A (2007) A model-driven approach to performability analysis of dynamically reconfigurable component-based systems In: Proceedings of WOSP '07, 103–114.. ACM, New York, NY, USA. Guck, D, Han T, Katoen JP, Neuhäußer MR (2012) Quantitative timed analysis of interactive markov chains. In: Goodloe AE Person S (eds)NASA Formal Methods. Lecture Notes in Computer Science, vol. 7226, 8–23.. Springer, Berlin, Heidelberg. Hallsteinsen, S, Geihs K, Paspallis N, Eliassen F, Horn G, Lorenzo J, Mamelli A, Papadopoulos GA (2012) A development framework and methodology for self-adapting applications in ubiquitous computing environments. J Syst Softw 85(12): 2840–2859. Hermanns, H (2002) Interactive Markov Chains: The Quest for Quantified Quality. Lecture Notes in Computer Science, Vol. 2428. Springer, Berlin, Heidelberg. Hermanns, H, Katoen JP (2010) The how and why of interactive markov chains In: Proceedings of FMCO'09. Lecture Notes in Computer Science, vol. 6286, 311–337.. Springer, Berlin, Heidelberg. Hnětynka, P, Plášil F (2006) Dynamic reconfiguration and access to services in hierarchical component models Component-Based software engineering In: Component-Based Software Engineering. Lecture Notes in Computer Science, vol. 4063, 352–359.. Springer, Berlin, Heidelberg. Chap. 27. CooPLa Team, CooPLa Editor (2014). http://coopla.di.uminho.pt Huber, N, Hoorn A, Koziolek A, Brosig F, Kounev S (2014) Modeling run-time adaptation at the system architecture level in dynamic service-oriented environments. Serv Oriented Comput Appl 8(1): 73–89. Huebscher, MC, McCann JA (2008) A survey of autonomic computing—degrees, models, and applications. ACM Comput Surv 40(3): 1–28. IBM Corp (2004) An Architectural Blueprint for Autonomic Computing. IBM Corp, USA. Kephart, JO, Chess DM (2003) The vision of autonomic computing. Computer 36(1): 41–50. Koehler, C, Arbab F, Vink E (2009). In: Corradini A Montanari U (eds)Reconfiguring Distributed Reo Connectors. Lecture Notes in Computer Science, vol 5486, 221–235.. Springer, Berlin, Heidelberg. Kramer, J, Magee J (1990) The evolving philosophers problem: Dynamic change management. IEEE Trans Softw Eng 16(11): 1293–1306. 
Krause, C (2011) Reconfigurable component connectors. PhD thesis, Leiden University, Amsterdam, The Netherlands. Kwiatkowska, M, Norman G, Parker D (2010) A framework for verification of software with time and probabilities In: Proceedings of FORMATS'10. Lecture Notes in Computer Science, vol. 6246, 25–45.. Springer, Berlin, Heidelberg. Litoiu, M, Mihaescu M, Ionescu D, Solomon B (2008) Scalable adaptive web services In: Proceedings of SDSOA '08, 47–52.. ACM, New York, NY, USA. Losavio, F, Chirinos L, Lévy N, Ramdane-Cherif A (2003) Quality characteristics for software architecture. J Object Technol 2(2): 133–150. Moon, Y, Arbab F, Silva A, Stam A, Verhoef C (2011) Stochastic Reo: a case study In: Proceedings of the 5th International Workshop on Harnessing Theories for Tool Support in Software (TTSS '11), 1–16, Oslo, Norway. Moon, YJ (2011) Stochastic models for quality of service of component connectors. PhD thesis, Universiteit Leiden. Moon, YJ, Silva A, Krause C, Arbab F (2014) A compositional model to reason about end-to-end QoS in stochastic Reo connectors. Sci Comput Programm 80: 3–24. Nallur, V, Bahsoon R (2013) A decentralized self-adaptation mechanism for service-based applications in the cloud. Softw Eng IEEE Trans 39(5): 591–612. Nilsson, NJ (1980) Principles of Artificial Intelligence. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA. MATH Google Scholar Oliveira, N, Barbosa LS (2013a) On the reconfiguration of software connectors In: Proceedings of SAC'2013, vol 2, 1885–1892.. ACM, New York, NY, USA. Oliveira, N, Barbosa LS (2013b) Reconfiguration mechanisms for service coordination. In: her Beek MH Lohmann N (eds)Web Services and Formal Methods. Lecture Notes in Computer Science, vol. 7843, 134–149.. Springer, Berlin, Heidelberg. Oliveira, N, Barbosa LS (2014) A self-adaptation strategy for service-based architectures In: VIII Brazilian Symposium on Software Components, Architectures and Reuse. SBCARS'2014, vol. 2, 44–53.. SBC - Brazilian Computer Society, Porto Alegre, RS, Brazil. Oliveira, N, Silva A, Barbosa LS (2014) Quantitative analysis of Reo-based service coordination In: Proceedings of SAC'14, 1247–1254.. ACM, New York, NY, USA. Oliveira, N, Silva A, Barbosa LS (2015) IMCReo: interactive Markov chains for stochastic Reo. J Internet Serv Inform Secur 5(1): 3–28. Oreizy, P, Gorlick MM, Taylor RN, Heimhigner D, Johnson G, Medvidovic N, Quilici A, Rosenblum DS, Wolf AL (1999) An architecture-based approach to self-adaptive software. Intell Syst Appl 14(3): 54–62. Rodrigues, F, Oliveira N, Barbosa LS (2014) 3rd Symposium on Languages, Applications and Technologies. OpenAccess Series in Informatics (OASIcs), vol 38. In: Pereira MJV, Leal JP, Simões A (eds), 61–76.. Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik, Dagstuhl, Germany. Salehie, M, Tahvildari L (2009) Self-adaptive software: Landscape and research challenges. ACM Trans Auton Adapt Syst 4(2): 1–42. van der Aalst, WMP, ter Hofstede AHM (2005) YAWL: yet another workflow language. Inform Syst 30(4): 245–275. Villegas Machado, NM, Müller HA, Tamura Morimitsu G (2011) On designing Self-Adaptive software systems. Sistemas & Telemática 9(18): 29–51. Vromant, P, Weyns D, Malek S, Andersson J (2011) On interacting control loops in self-adaptive systems In: Proceedings of the 6th International Symposium on Software Engineering for Adaptive and Self-Managing Systems. SEAMS '11, 202–207.. ACM, New York, NY, USA. Wermelinger, MA (1999) Specification of software architecture reconfiguration. 
PhD thesis, Universidade Nova de Lisboa, Lisboa, Portugal. Weyns, D, Schmerl B, Grassi V, Malek S, Mirandola R, Prehofer C, Wuttke J, Andersson J, Giese H, Göschka K (2013) On patterns for decentralized control in Self-Adaptive systems. In: de Lemos R, Giese H, Müller H, Shaw M (eds)Software Engineering for Self-Adaptive Systems II. Lecture Notes in Computer Science, vol. 7475, 76–107.. Springer, Berlin Heidelberg. We would like to thank the SBCARS'2014 reviewers and conference participants for the questions raised, which we have tried to address here. This work is funded by ERDF - European Regional Development Fund through the COMPETE Programme (operational programme for competitiveness) and by National Funds through FCT, the Portuguese Foundation for Science and Technology, within project FCOMP-01-0124-FEDER-028923. Author Nuno Oliveira was supported by a Doctoral Grant from FCT, with reference SFRH/BD/71475/2010. HASLab - INESC TEC & Universidade do Minho, Braga, Portugal Nuno Oliveira & Luís S Barbosa Nuno Oliveira Luís S Barbosa Correspondence to Nuno Oliveira. NO developed the adaptation strategies with all associated techniques, designed and conducted the case study and drafted the manuscript. LSB discussed the obtained results and thoroughly revised the paper. All authors read and approved the final manuscript. Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0), which permits use, duplication, adaptation, distribution, and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. Oliveira, N., Barbosa, L.S. Self-adaptation by coordination-targeted reconfigurations. J Softw Eng Res Dev 3, 6 (2015). https://doi.org/10.1186/s40411-015-0021-2 Self-adaptive software Reconfiguration Software coordination Service-oriented architectures SBCARS 2014 (Brazilian Symposium on Software Components, Architectures and Reuse)
Castelnuovo-Mumford regularity and Bridgeland stability of points in the projective plane
by Izzet Coskun, Donghoon Hyeon and Junyoung Park
Proc. Amer. Math. Soc. 145 (2017), 4573-4583
In this paper, we study the relation between Castelnuovo-Mumford regularity and Bridgeland stability for the Hilbert scheme of $n$ points on $\mathbb{P}^2$. For the largest $\lfloor \frac{n}{2} \rfloor$ Bridgeland walls, we show that the general ideal sheaf destabilized along a smaller Bridgeland wall has smaller regularity than one destabilized along a larger Bridgeland wall. We give a detailed analysis of the case of monomial schemes and obtain a precise relation between the regularity and the Bridgeland stability for the case of Borel fixed ideals.
Izzet Coskun Affiliation: Department of Mathematics, Statistics and Computer Science, University of Illinois at Chicago, Chicago, Illinois 60607 MR Author ID: 736580 Email: [email protected] Donghoon Hyeon Affiliation: Department of Mathematical Sciences, Seoul National University, Seoul, Republic of Korea Email: [email protected] Junyoung Park Affiliation: Department of Mathematics, POSTECH, Pohang, Gyungbuk, Republic of Korea Email: [email protected] Received by editor(s): February 22, 2016 Received by editor(s) in revised form: September 3, 2016 Published electronically: July 27, 2017 Additional Notes: The first author was partially supported by the NSF CAREER grant DMS-0950951535 and the NSF grant DMS-1500031. The second author was supported by the following grants funded by the government of Korea: NRF grant 2011-0030044 (SRC-GAIA) and NRF grant NRF-2013R1A1A2010649. Communicated by: Lev Borisov Journal: Proc. Amer. Math. Soc. 145 (2017), 4573-4583 MSC (2010): Primary 14C05, 13D02, 14D20; Secondary 13D99, 14D99, 14C99 DOI: https://doi.org/10.1090/proc/13470
Basic critical care echocardiography training of intensivists allows reproducible and reliable measurements of cardiac output
Christian Villavicencio (ORCID: orcid.org/0000-0001-5539-5087)1, Julen Leache1, Judith Marin2, Iban Oliva1, Alejandro Rodriguez1, María Bodí1 & Nilam J. Soni3,4,5
The Ultrasound Journal volume 11, Article number: 5 (2019)
Although pulmonary artery catheters (PACs) have been the reference standard for calculating cardiac output, echocardiographic estimation of cardiac output (CO) by cardiologists has shown high accuracy compared to PAC measurements. Few studies have assessed the accuracy of echocardiographic estimation of CO in critically ill patients by intensivists with basic training. The aim of this study was to evaluate the accuracy of CO measurements by intensivists with basic training using pulsed-wave Doppler ultrasound vs. PACs in critically ill patients. Critically ill patients who required hemodynamic monitoring with a PAC were eligible for the study. Three different intensivists with basic critical care echocardiography training obtained three measurements of CO on each patient. The maximum of three separate left-ventricular outflow tract diameter measurements and the mean of three LVOT velocity time integral measurements were used. The inter-observer reliability and correlation of CO measured by PACs vs. critical care echocardiography were assessed. A total of 20 patients were included. Data were analyzed comparing the measurements of CO by PAC vs. echocardiography. The inter-observer reliability for measuring CO by echocardiography was good, based on a coefficient of intraclass correlation of 0.6 (95% CI 0.48–0.86, p < 0.001). Bias and limits of agreement between the two techniques were acceptable (0.64 ± 1.18 L/min, 95% limits of agreement of − 1.73 to 3.01 L/min). In patients with CO < 6.5 L/min, the agreement between CO measured by PAC vs. echocardiography improved (0.13 ± 0.89 L/min; 95% limits of agreement of − 1.64 to 2.22 L/min). The mean percentage of error between the two methods was 17%. Critical care echocardiography performed at the bedside by intensivists with basic critical care echocardiography training is an accurate and reproducible technique to measure cardiac output in critically ill patients.
Cardiac output (CO) is the reference standard measurement for assessing target organ perfusion and oxygen delivery in shock. Assessing CO in critically ill patients allows physicians to determine hemodynamic status, identify the most appropriate therapeutic strategy, and monitor the effects of therapy. Insertion of a pulmonary artery catheter (PAC) has been historically required to calculate CO by thermodilution [1]. However, routine use of PACs in patients with shock is no longer recommended, except in those patients presenting refractory shock, cardiogenic shock, or right-ventricular dysfunction [2]. In recent years, there has been increasing interest to develop non-invasive or minimally invasive technologies to measure CO. Among them, critical care echocardiography (CCE) has emerged as a promising technique that is commonly available, less expensive, and non-invasive (transthoracic echocardiography) or minimally invasive (transesophageal echocardiography) [3, 4]. In stable patients, estimation of CO by CCE has been shown to be accurate when compared to the standard thermodilution technique using a PAC [5,6,7].
Few studies have compared the accuracy of these techniques in critically ill patients [8], likely due to the limited ability to acquire high-quality images in critically ill patients [9]. Despite this, technological advancements are making it easier to obtain high-quality images, and as recommendations on the appropriate use of CCE in intensive-care units (ICUs) have emerged [10,11,12,13], CCE has become standard practice in many ICUs to evaluate cardiac function. The primary objective of this study was to compare CO measured by intensivists with basic CCE skills using pulsed-wave Doppler (PWD) vs. PAC in critically ill patients. The secondary objective was to evaluate the inter-observer reliability of PWD-CO measured amongst intensivists with basic CCE skills, as well as to identify factors associated with difficult acquisition of PWD-CO measurements with CCE. We performed an observational study in a 30-bed medical ICU at Joan XXIII University Hospital in Spain. Approval was obtained from the Joan XXIII University Hospital Ethics Committee (IRB # 88/2013), and the study was considered to present minimal risk to subjects. Informed consent was obtained from each subject or their next of kin. Critically ill patients who required hemodynamic monitoring and were admitted to the ICU were eligible for enrollment from May 2013 to May 2015. Additional eligibility criteria included age > 18, monitoring with a PAC, and interpretable images acquired by CCE. Exclusion criteria included a medical history of congenital heart disease, severe tricuspid regurgitation, severe aortic regurgitation, aortic stenosis, pregnancy, and atrial fibrillation. CO measurements were acquired independently of the subject's medical and nursing care, and investigators did not change medical management based on findings of this study. Before study enrolment, three intensivists were trained to measure CO with a portable ultrasound machine by attending a CCE course that included 10 h of didactics and 4 h of hands-on instruction on the acquisition of high-quality parasternal long-axis and apical 5-chamber views. Training also included 10 h of didactics and 6 h of hands-on instruction for advanced cardiac training, to learn how to use cardiac software to measure the left-ventricular outflow tract diameter (LVOTd) and the left-ventricular outflow tract velocity time integral (VTI).
Study protocol and data measurements
Subjects were enrolled during the first 24 h of being invasively monitored with a PAC. The decision to insert a PAC was at the discretion of the treating physician. The following demographic, clinical, and physiologic data were collected: age, sex, weight, height, heart rate (HR), central venous pressure (CVP), mean arterial blood pressure (MAP), Acute Physiology and Chronic Health Evaluation II score (APACHE II) [14], the Sequential Organ-Failure Assessment (SOFA) score [15], use of mechanical ventilation (MV), positive end-expiratory pressure (PEEP), use of renal replacement therapy, need for vasoactive drugs, and interpretability of the ultrasound images. All echocardiographic measurements were done with an Esaote MyLab 30 GOLD cardiovascular ultrasound system (Esaote, Genoa, Italy) equipped with a 3.5 MHz phased-array transducer. Measurements were obtained independently by three blinded intensivists and included a set of hemodynamic parameters: LVOTd, VTI, and HR.
All ultrasound images obtained by the three intensivists were stored in digital format and analyzed independently by two blinded investigators to assess the interpretability of the images using a standardized rating scale [16]. Once a subject was enrolled, the three intensivists performed sequential measurements of PWD-CO. The PAC-CO was obtained after each echocardiographic measurement. The PWD-CO was calculated using the maximum value of three LVOTd measurements and the average of three VTI values [17]. The PWD-CO was calculated as follows: $$\text{PWD-CO} = \text{Stroke volume (SV)} \times \text{HR}, \quad \text{where } \text{SV} = 3.1416 \times \left(\text{LVOTd}/2\right)^{2} \times \text{VTI}.$$ The LVOTd was measured from a parasternal long-axis view (Fig. 1). The distance from the inner edge to inner edge of the LVOT was measured in a line parallel to the aortic annulus from the base of the right aortic valve coronary cusp to the base of the non-coronary cusp. The VTI was measured by obtaining an apical 5-chamber view and then placing a pulsed-wave Doppler cursor in the LVOT below the aortic valve annulus (Fig. 2). We measured the VTI at the same time in the respiratory cycle, ideally at the end of expiration. Fig. 1 Measurement of the LVOTd from a parasternal long-axis view. Fig. 2 Measurement of the LVOT VTI from an apical 5-chamber view. The Doppler signal was traced using cardiac software to calculate the VTI, and an average of three measurements was used. The HR was calculated using the ultrasound cardiac software and not by physical examination or telemetry. The PAC-CO was measured using a 7-French balloon-tipped standard four-lumen PAC model 131HF7 (Edwards Lifesciences Corp, Irvine, CA, USA) connected to a cardiac output monitor LCD medical display model MOLVL 150-05 (General Electric, Milwaukee, Wisconsin). PAC-CO measurements were obtained by injecting 10 mL of cold 0.9% saline throughout the respiratory cycle. The CO was measured three times and the results were averaged [18]. All the PWD-CO and PAC-CO measurements were obtained within a maximum of 1 h. The intensivists obtaining the thermodilution results (PAC-CO) were blinded to the PWD-CO measurements and vice versa. First, a descriptive analysis was performed. Normal distribution of the study variables was confirmed using the Kolmogorov–Smirnov test. Discrete variables were expressed as counts and percentages, and continuous variables were expressed as means with standard deviations (SD) or as medians with interquartile ranges (25th–75th percentile). Differences between groups were assessed using a Chi-squared test or Fisher's exact test, and Student's t test or Mann–Whitney U test, as appropriate. A p < 0.05 was considered statistically significant. The measurement of PAC-CO was considered to be the gold standard measurement for comparison. PWD-CO measurements were compared to the PAC-CO measurements for each individual time-point. Comparisons between these measurements were performed by the linear technique described by Bland and Altman [19]. We defined a clinically acceptable level of agreement between the two techniques when the percentage of error was less than 30%, as described by Critchley and Critchley [20]. This cut-off is based on the assumption that a new device intended to monitor CO should have a similar level of precision as the gold standard technique, which in this case is the PAC-CO [21].
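To make the stroke-volume formula quoted above concrete, the following small C snippet (an illustration only, not part of the study; the function and variable names are ours) computes PWD-CO from the three quantities the intensivists measured.

```c
/* Illustration of the PWD-CO formula above; names are ours, not the study's.
 * lvotd_cm: maximum of the three LVOT diameter measurements (cm)
 * vti_cm:   mean of the three LVOT VTI traces (cm)
 * hr_bpm:   heart rate taken from the ultrasound cardiac software (beats/min)
 * Returns cardiac output in L/min. */
double pwd_cardiac_output(double lvotd_cm, double vti_cm, double hr_bpm)
{
    double radius_cm     = lvotd_cm / 2.0;
    double lvot_area_cm2 = 3.1416 * radius_cm * radius_cm;  /* pi * r^2 */
    double sv_ml         = lvot_area_cm2 * vti_cm;          /* stroke volume; 1 cm^3 = 1 mL */
    return sv_ml * hr_bpm / 1000.0;                         /* mL/min -> L/min */
}
```

For example, with the mean LVOTd of 1.92 cm and mean VTI of 20.85 cm reported in the results, an assumed heart rate of 86 beats/min yields roughly 5.2 L/min, consistent with the mean PWD-CO reported below.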
The mean differences between the two techniques (bias), the standard deviation (SD), the precision, and the percentage of error (PE), together with the 95% limits of agreement (LOA), were determined for both techniques. The PE for agreement between the two techniques was calculated using the following equation: $$\text{PE}_{\text{PAC-PWD}} = \sqrt{\left(\text{precision}_{\text{PAC}}\right)^{2} + \left(\text{precision}_{\text{PWD}}\right)^{2}}.$$ The coefficient of variation (CV) and coefficient of error (CE) were also calculated for both techniques and between them. The intra- and inter-observer variability was measured by the coefficient of intraclass correlation (CIC) and organized according to the Fleiss kappa scale (Fleiss index). A CIC greater than 0.6 was considered acceptable. Data were analyzed using SPSS Statistics for Windows version 15.0 (IBM Corp., Armonk, NY, USA). A total of 42 critically ill patients were assessed for enrolment in this study. Among them, 14 patients (33.3%) were excluded due to inability to acquire a high-quality image from the parasternal long-axis view to measure LVOTd or the apical 5-chamber view to measure VTI. An additional eight patients (19%) were excluded due to atrial fibrillation (n = 5), aortic valve disease (n = 2), or technical difficulties in obtaining the PAC-CO measurement (n = 1). Data were analyzed from 20 subjects [mean age 67 (± 14) years, 70% males]. Baseline characteristics of the study population are shown in Table 1. Briefly, the most common diagnosis for ICU admission was septic shock (45%). The majority of patients were receiving mechanical ventilation (90%) and vasopressor medications (80%). Table 1 Study population demographics and clinical characteristics Compared to included patients, the excluded patients had a faster heart rate and required higher norepinephrine doses. Variables associated with inability to acquire high-quality echocardiographic views were an abdominal wall dressing (p = 0.043) and high tidal volumes (p = 0.008) (Table 2). Table 2 Factors associated with inability to acquire echocardiographic views Data measurements PWD-CO was acquired successfully in 20 patients. To acquire the desired measurements with PWD-CO and PAC-CO, a mean of 54 (± 23) min elapsed to perform a complete examination, from setting up the ultrasound machine for the PWD-CO measurement to acquiring the PAC-CO measurement. For measurement of the PWD-CO alone, a mean of 12 (± 4) min elapsed. The mean LVOTd was 1.92 cm (± 0.13 cm) and the mean VTI was 20.85 cm (± 3.72 cm). The average PWD-CO was 5.22 L/min (± 1.17 L/min), which was less than the average PAC-CO of 6.26 L/min (± 1.96 L/min). The Pearson correlation coefficient demonstrated a reasonable correlation between PWD-CO and PAC-CO measurements (r = 0.78, p < 0.0001) (Fig. 3). To compare CO by both techniques, a Bland–Altman analysis was performed and showed a bias of 1.03 L/min (± 1.27 L/min) with 95% limits of agreement ranging from − 1.50 to 3.56 L/min (Fig. 4). Less difference was seen between both techniques in patients with reduced cardiac output. In those patients with CO < 6.5 L/min, a bias of 0.46 L/min (± 0.88 L/min) with 95% limits of agreement of − 1.29 to 2.22 L/min was found. Fig. 3 Correlation of PAC-CO and PWD-CO (r = 0.78, p < 0.0001). Fig. 4 Bland–Altman plots:
(a) difference in PAC-CO and PWD-CO in all patients, and (b) difference in PAC-CO and PWD-CO in patients with CO < 6.5 L/min. The bias, precision, level of agreement, percentage of error, coefficient of variation, and coefficient of error are listed in Table 3. The mean PE between PWD-CO and PAC-CO was 17%. In one patient, the mean PE was higher than 30%. In this case, the heart rate was normal with a high stroke volume, and we could not explain the reason for this outlier. Table 3 Agreement between PWD-CO and PAC-CO Finally, we found an excellent intra-observer and a good inter-observer agreement between the LVOTd and VTI measurements using the Fleiss kappa scale. Detailed results are shown in Table 4. Table 4 Intra- and inter-observer variability In this study, we found an acceptable agreement of CO measured by CCE vs. PAC with thermodilution, and the inter- and intra-observer reliability was high. These findings suggest that CO can be accurately measured in critically ill patients by intensivists with basic CCE training. However, it is important to recognize that high-quality transthoracic images to calculate CO could only be obtained in about half of eligible patients. Although studies since the 1980s have shown that PWD measurements can accurately determine CO [4,5,6,7,8,9], few studies have compared PWD-CO vs. PAC-CO in non-selected, critically ill patients. A recent systematic review of cardiac output measurements by echocardiography vs. thermodilution [22] concluded that the two techniques are not interchangeable. Twenty-four studies of critically ill and non-critically ill patients were included, and both transesophageal and transthoracic echocardiography were used in these studies. None of the studies assessed inter- and intra-observer variability. Important limitations of the studies in this systematic review were small sample sizes, heterogeneity, and inadequate statistical analyses. To our knowledge, one study that compared the use of PWD-CO vs. PAC-CO in critically ill patients found high accuracy and precision between the two techniques [23]. Although the design of this study is comparable to our study, the PWD-CO measurements were obtained by intensivists with extensive experience in CCE. Similar to previously published studies, our bias analysis showed a systematic underestimation of CO by PWD compared to thermodilution by PAC [24]. This discrepancy was more notable in patients with high cardiac outputs (Fig. 4), probably related to the influence of high flow velocities and turbulent flow on the PWD signal, variability of the VTI angle [25], physiologic fluctuations in stroke volume, and the size of the aortic valve orifice [26]. Our study demonstrated that intensivists with basic CCE training can assess cardiac output in an unselected population of critically ill patients with an acceptable level of agreement between the PWD-CO and PAC-CO measurements. Although isolated CO values should be interpreted with caution, our findings indicate that PWD-CO measurements were accurate over a wide range of cardiac outputs, showing an even stronger correlation in patients with a cardiac output < 6.5 L/min, which can have important implications for the management of vasopressors and fluid therapy. Additionally, our study is one of the few studies that assessed the inter- and intra-observer variability, and reported the challenges of acquiring high-quality transthoracic images by intensivists with basic CCE training.
The intra-observer agreement was excellent and inter-observer agreement was good for ultrasound measurements of LVOT diameter, VTI, and CO. The coefficients of intraclass correlation were acceptable and similar to values described in the literature [27], suggesting that serial measurements, even if performed by different observers with basic training, can be sufficiently reproducible in clinical practice. We also found a significant association between abdominal wall dressings and poor-quality images. Our study has several limitations. First, the total number of subjects from whom data was analyzed was small (n = 20). Approximately half of the patients were excluded due to difficulty in acquiring high-quality images. This limitation of our study is similar to the other studies [28, 29] where high-quality images were not acquired due to use of mechanical ventilation and high levels of PEEP. Furthermore, use of PACs for hemodynamic monitoring has been progressively decreasing in our intensive care unit given the availability of non-invasive methods to measure CO. Thus, use of a PAC was left to the discretion of the attending physician when another less invasive methods of monitoring CO could not be utilized. Another limitation of our study is the time required to acquire the CO measurements, which averaged close to an hour for a complete examination [mean 54 (± 23) min]. Although this amount of time would be impractical in clinical practice, it is important to note that several measurements were obtained to follow our research study protocol. Most important, the mean time to acquire only the PWD-CO was 12 min, which is realistic to perform in clinical practice. The time and accuracy of these measurements could potentially be improved if acquired by experienced intensivists or cardiac sonographers. Finally, limited experience of the intensivists in our study was likely an important factor that reduced the accuracy of the PWD-CO measurements. This limited experience is probably due in part to the fact that standards for CCE education currently vary by country, and there is no widely accepted consensus on the training of intensivists [30], despite the recommendations of professional societies to define competencies for basic and advanced training levels [10, 11]. Although an acceptable level of agreement was achieved between CO measured by CCE vs. PAC, the effect of individual or serial measurements of CO on clinical outcomes in critically ill patients is unknown. A recent study found a moderate level of agreement in the hemodynamic assessments performed using transpulmonary thermodilution (TPT) vs. CCE in ventilated patients with septic shock. However, there was no impact in mortality or lactate clearance [31]. Future studies should explore the impact of assessing CO by CCE on mortality and other important clinical outcomes. In conclusion, our findings demonstrate that intensivists with basic critical care echocardiography training can accurately and reliably measure CO in critically ill patients compared to gold standard measurements using a pulmonary artery catheter. However, an important limitation is the inability to obtain high-quality transthoracic images to calculate CO in approximately half of eligible patients. 
PAC: pulmonary artery catheter; TTE: transthoracic echocardiography; TEE: transesophageal echocardiography; CCE: critical care echocardiography; ICUs: intensive-care units; PWD: pulsed-wave Doppler ultrasound; LVOTd: left-ventricular outflow tract diameter; VTI: left-ventricular outflow tract velocity time integral; CVP: central venous pressure; MAP: mean arterial blood pressure; APACHE II: Acute Physiology and Chronic Health Evaluation II score; SOFA: Sequential Organ-Failure Assessment score; PEEP: positive end-expiratory pressure; SV: stroke volume; LOA: limits of agreement; PE: percentage of error; CV: coefficient of variation; CE: coefficient of error; CIC: coefficient of intraclass correlation. Connors AF Jr, Speroff T, Dawson NV (1996) The effectiveness of right heart catheterization in the initial care of critically ill patients. JAMA 276(11):889–897 Cecconi M, De Backer D, Antonelli M, Beale R, Bakker J, Hofer C, Jaeschke R, Mebazaa A, Pinsky MR, Teboul JL, Vincent JL, Rhodes A (2014) Consensus on circulatory shock and hemodynamic monitoring. Task force of the European Society of Intensive Care Medicine. Intensive Care Med 40(12):1795–1815 Ihlen H, Amlie JP, Dale J (1984) Determination of cardiac output by Doppler echocardiography. Br Heart J 51(1):54–60 McLean AS, Needham A, Stewart D, Parkin R (1997) Estimation of cardiac output by noninvasive echocardiographic techniques in the critically ill subject. Anaesth Intensive Care 25(3):250–254 Axler O, Megarbane B, Lentschener C (2003) Comparison of cardiac output measured with echocardiographic volumes and aortic Doppler methods during mechanical ventilation. Intensive Care Med 29(1):208–217 Evangelista A, Garcia-Dorado D, Del Castillo H (1995) Cardiac index quantification by Doppler ultrasound in patients without left ventricular outflow tract abnormalities. J Am Coll Cardiol 25(3):710–716 Schuster AH, Nanda NC (1984) Doppler echocardiographic measurement of cardiac output: comparison with a non-golden standard. Am J Cardiol 53(1):257–259 Mayer SA, Sherman D, Fink ME (1995) Noninvasive monitoring of cardiac output by Doppler echocardiography in patients treated with volume expansion after subarachnoid hemorrhage. Crit Care Med 23(9):1470–1474 Vignon P, Mentec H, Terre S (1994) Diagnostic accuracy and therapeutic impact of transthoracic and transesophageal echocardiography in mechanically ventilated patients in the ICU. Chest 106(6):1829–1834 Mayo PH, Beaulieu Y, Doelken P (2009) American College of Chest Physicians/La Société de Réanimation de Langue Française statement on competence in critical care ultrasonography. Chest 135(4):1050–1060 Expert Round Table on Ultrasound in ICU (2011) International expert statement on standards for critical care ultrasonography. Intensive Care Med 37(7):1077–1083 Orde S, Slama M, Hilton A, Yastreboy K, McLean A (2017) Pearls and pitfalls in comprehensive critical care echocardiography. Crit Care 21(1):279 Price S, Via G, Sloth E, Guarracino F, Breitkreutz R, Catena E, Talmor D, World Interactive Network Focused On Critical UltraSound ECHO-ICU Group (2008) Echocardiography practice, training and accreditation in the intensive care: document for the World Interactive Network Focused on Critical Ultrasound (WINFOCUS). Cardiovasc Ultrasound 6:49 Knaus WA, Draper EA, Wagner DP, Zimmerman JE (1985) APACHE II: a severity of disease classification system. Crit Care Med 13(10):818–829 Vincent JL, de Mendonca A, Cantraine F (1998) Use of the SOFA score to assess the incidence of organ dysfunction/failure in intensive care units: results of a multicenter, prospective study.
Crit Care Med 26(11):1793–1800 Dinh VA, Ko HS, Rao R, Bansal RC, Smith DD, Kim TE, Nguyen HB (2012) Measuring cardiac index with a focused cardiac ultrasound examination in the ED. Am J Emerg Med 30(9):1845–1851 Armstrong W, Ryan T (2000) Feigenbaum's echocardiography, 7th edn. Lippincott Williams & Wilkins, Philadelphia Jansen JRC, Schreuder JJ, Bogaard JM, Van Rooyen W, Versprille A (1981) Thermodilution technique for measurement of cardiac output during artificial ventilation. J Appl Physiol Respir Environ Exerc Physiol 51(3):584–591 Bland JM, Altman DG (1986) Statistical methods for assessing agreement between two methods of clinical measurement. Lancet 1(8476):307–310 Critchley LA, Critchley JA (1999) A meta-analysis of studies using bias and precision statistics to compare cardiac output measurement techniques. J Clin Monit Comput 15(2):85–91 Taylor RW, Clavin JE, Matuschat GM (1997) Pulmonary artery catheter consensus conference: the first step. Crit Care Med 25(12):910–925 Wetterslev M, Møller-Sørensen H, Johansen RR, Perner A (2016) Systematic review of cardiac output measurements by echocardiography vs. thermodilution: the techniques are not interchangeable. Intensive Care Med 42(8):1223–1233 Mercado P, Maize J, Beyls C, Titeca-Beauport D, Joris M, Kontar L, Riviere A, Bonef O, Soupison T, Tribouilloy C, de Cagny B, Slama M (2017) Transthoracic echocardiography: an accurate and precise method for estimating cardiac output in the critically ill patient. Crit Care 21(1):136 Valtier B, Cholley BP, Belot JP (1998) Noninvasive monitoring of cardiac output in critically ill patients using transesophageal Doppler. Am J Respir Crit Care Med 158:77–83 Espersen K, Jensen EW, Rosenborg D (1995) Comparison of cardiac output measurement techniques: thermodilution, Doppler, CO2-rebreathing and the direct Fick method. Acta Anaesthesiol Scand 39(2):245–251 Fisher DC, Sahn DJ, Friedman MJ, Larson D, Valdes-Cruz LM, Horowitz S, Goldberg SJ, Allen HD (1983) The mitral valve orifice method for non invasive two-dimensional echo Doppler determinations of cardiac output. Circulation 67(4):872–877 Prieto L, Lamarca R, Casado A (1998) Assessment of the reliability of clinical findings: the intraclass correlation coefficient. Med Clín 110(4):142–145 Boussuges A, Blanc P, Molenat F (2002) Evaluation of left ventricular filling pressure by transthoracic Doppler echocardiography in the intensive care unit. Crit Care Med 30(2):362–367 Nagueh SF, Kopelen HA, Zoghbi WA (1995) Feasibility and accuracy of Doppler echocardiographic estimation of pulmonary artery occlusive pressure in the intensive care unit. Am J Cardiol 75(17):1256–1262 Labbé V, Ederhy S, Pasquet B, Miguel-Montanes R, Rafat C, Hajage D, Gaudry S, Dreyfuss D, Cohen A, Fartoukh M, Ricard JD (2016) Can we improve transthoracic echocardiography training in non-cardiologist residents? Experience of two training programs in the intensive care unit. Ann Intensive Care 6(1):44 Vignon P, Begot E, Mari A, Silva S, Chimot L, Delour P, Vargas F, Filloux B, Vandroux D, Jabot J, François B, Pichon N, Clavel M, Levy B, Slama M, Riupoulene B (2018) Hemodynamic assessment of patients with septic shock using transpulmonary thermodilution and critical care echocardiography: a comparative study. Chest 153(1):55–64 All authors contributed to study conception and design, data analysis, and drafting the manuscript. All authors read and approved the final manuscript. The authors added the database of the present study as Additional file 1. 
Consent for participation and for publication was obtained. The study was approved by the Ethics and Clinical Research Committee. Informed consent was obtained for all patients. There was no funding to support this study. Critical Care Department, Joan XXIII-University Hospital, Mallafre Guasch 4, 43007, Tarragona, Spain Christian Villavicencio, Julen Leache, Iban Oliva, Alejandro Rodriguez & María Bodí Critical Care Department, Hospital del Mar-Research Group in Critical Illness (GREPAC), Institut Hospital del Mar d'investigacions Mèdiques (IMIM), Barcelona, Spain Judith Marin Division of Pulmonary & Critical Care Medicine, University of Texas Health San Antonio, San Antonio, TX, USA Nilam J. Soni Division of General & Hospital Medicine, University of Texas Health San Antonio, San Antonio, TX, USA Section of Hospital Medicine, South Texas Veterans Health Care System, San Antonio, TX, USA Correspondence to Christian Villavicencio. Additional file 13089_2019_120_MOESM1_ESM.xlsx Additional file 1. PWD-CO vs. PAC-CO Data. Villavicencio, C., Leache, J., Marin, J. et al. Basic critical care echocardiography training of intensivists allows reproducible and reliable measurements of cardiac output. Ultrasound J 11, 5 (2019) doi:10.1186/s13089-019-0120-0 Received: 10 September 2018 Pulsed-wave Doppler
CommonCrawl
High-speed Curve25519 on 8-bit, 16-bit, and 32-bit microcontrollers Michael Düll1, Björn Haase2, Gesine Hinterwälder1, Michael Hutter3, Christof Paar1, Ana Helena Sánchez4 & Peter Schwabe4 Designs, Codes and Cryptography volume 77, pages 493–514 (2015) This paper presents new speed records for 128-bit secure elliptic-curve Diffie–Hellman key-exchange software on three different popular microcontroller architectures. We consider a 255-bit curve proposed by Bernstein known as Curve25519, which has also been adopted by the IETF. We optimize the X25519 key-exchange protocol proposed by Bernstein in 2006 for AVR ATmega 8-bit microcontrollers, MSP430X 16-bit microcontrollers, and for ARM Cortex-M0 32-bit microcontrollers. Our software for the AVR takes only 13,900,397 cycles for the computation of a Diffie–Hellman shared secret, and is the first to perform this computation in less than a second if clocked at 16 MHz for a security level of 128 bits. Our MSP430X software computes a shared secret in 5,301,792 cycles on MSP430X microcontrollers that have a 32-bit hardware multiplier and in 7,933,296 cycles on MSP430X microcontrollers that have a 16-bit multiplier. It thus outperforms previous constant-time ECDH software at the 128-bit security level on the MSP430X by more than a factor of 1.2 and 1.15, respectively. Our implementation on the Cortex-M0 runs in only 3,589,850 cycles and outperforms previous 128-bit secure ECDH software by a factor of 3. A large and growing share of the world's CPU market is formed by embedded microcontrollers. A surprisingly large number of embedded systems require security, e.g., electronic passports, smartphones, car-to-car communication and industrial control units. The continuously growing Internet of Things will only add to this development. It is of great interest to provide efficient cryptographic primitives for embedded CPUs, since virtually every security solution is based on crypto algorithms. Whereas symmetric algorithms are comparably efficient and some embedded microcontrollers even offer hardware support for them [3], asymmetric cryptography is notoriously computationally intensive. Since the invention of elliptic-curve cryptography (ECC) in 1985, independently by Koblitz [26] and Miller [31], it has become the method of choice for many applications, especially in the embedded domain. Compared to schemes that are based on the hardness of integer factoring, most prominently RSA, and schemes based on the hardness of the discrete logarithm in the multiplicative group \(\mathbb {Z}_n^*\), like the classical Diffie–Hellman key exchange or DSA, ECC offers significantly shorter public keys, faster computation times for most operations, and an impressive security record. For suitably chosen elliptic curves, the best attacks known today still have the same complexity as the best attacks known in 1985. Over the last decade and a half, various elliptic curves have been standardized for use in cryptographic protocols such as TLS. The most widely used standard for ECC is the set of NIST curves proposed by NSA's Jerry Solinas and standardized in [33, Appendix D]. Various other curves have been proposed and standardized, for example the FRP256v1 curve by the French ANSSI [1], the Brainpool curves by the German BSI [30], or the SM2 curves proposed by the Chinese government [36].
It is known for quite a while that all of these standardized curves are not optimal from a performance perspective and that special cases in the group law complicate implementations that are at the same time correct, secure, and efficient. These disadvantages together with some concerns about how these curves were constructed—see, for example [10, 37]—recently lead to increased interest in reconsidering the choice of elliptic curves for cryptography. As a consequence, in 2015 the IETF adopted two next-generation curves as draft internet standard for usage with TLS [25]. One of the promising next-generation elliptic curves now also adopted by the IETF is Curve25519. Curve25519 is already in use in various applications today and was originally proposed by Bernstein in 2006 [5]. Bernstein uses the Montgomery form of this curve for efficient, secure, and easy-to-implement elliptic-curve Diffie–Hellman key exchange. Originally, the name "Curve25519" referred to this key-exchange protocol, but Bernstein recently suggested to rename the scheme to X25519 and to use the name Curve25519 for the underlying elliptic curve [6]. We will adopt this new notation in this paper. Several works describe the excellent performance of this key-agreement scheme on large desktop and server processors, for example, the Intel Pentium M [5], the Cell Broadband Engine [13], ARM Cortex-A8 with NEON [7], or Intel Nehalem/Westmere [8, 9]. Contributions of this paper This paper presents implementation techniques of X25519 for three different, widely used embedded microcontrollers. All implementations are optimized for high speed, while executing in constant time, and they set new speed records for constant-time variable-base-point scalar multiplication at the 128-bit security level on the respective architectures. To some extent, the results presented here are based on earlier results by some of the authors. However, this paper does not merely collect those previous results, but significantly improves performance. Specifically, the software for the AVR ATmega family of microcontrollers presented in this paper takes only 13,900,397 cycles and is thus more than a factor of 1.6 faster than the X25519 software described by Hutter and Schwabe [23]. The X25519 implementation for MSP430Xs with 32-bit multiplier presented in this paper takes only 5,301,792 cycles and is thus more than a factor of 1.2 faster, whereas the implementation for MSP430Xs with 16-bit multiplier presented in this paper takes 7,933,296 cycles and is more than a factor of 1.15 faster than the software presented by Hinterwälder et al. [21]. Furthermore, this paper is the first to present a X25519 implementation optimized for the very widely used ARM Cortex-M0 architecture. The implementation requires only 3,589,850 cycles, which is a factor of 3 faster than the scalar multiplication on the NIST P-256 curve described by Wenger et al. [45]. A note on side-channel protection All the software presented in this paper avoids secret-data-dependent branches and secretly indexed memory access and is thus inherently protected against timing attacks. Protection against power-analysis (and EM-analysis) attacks is more complex. For example, the implementation of the elliptic-curve scalar multiplication by Wenger et al. [45] includes an initial randomization of the projective representation (and basic protection against fault-injection attacks). The authors claim that their software is "secure against (most) side-channel attacks". 
Under the assumption that good randomness is readily available (which is not always the case in embedded systems), projective randomization indeed protects against first-order DPA attacks and the recently proposed online-template attacks [4]. However, it does not protect against horizontal attacks [12] or higher-order DPA attacks. DPA attacks are mainly an issue if X25519 is used for static Diffie–Hellman key exchange with long-term keys; they are not an issue at all for ephemeral Diffie–Hellman without key re-use. Adding projective randomization would be easy (assuming a reliable source of randomness) and the cost would be negligible, but we believe that serious protection against side-channel attacks requires more investigation, which is beyond the scope of this paper. Availability of software We placed all the software described in this paper into the public domain. The software for AVR ATmega is available at http://munacl.cryptojedi.org/curve25519-atmega.shtml; the software for TI MSP430 is available at http://munacl.cryptojedi.org/curve25519-msp430.shtml; and the software for ARM Cortex M0 is available at http://munacl.cryptojedi.org/curve25519-cortexm0.shtml. Organization of this paper Section 2 reviews the X25519 elliptic-curve Diffie–Hellman key exchange protocol. Section 3 describes our implementation for AVR ATmega, Sect. 4 describes our implementation for MSP430X, and Sect. 5 describes our implementation for Cortex-M0. Each of these three sections first briefly introduces the architecture, then gives details of the implementation of the two most expensive operations, namely field multiplication and squaring, and then concludes with remarks on other operations and the full X25519 implementation. Finally, Sect. 6 presents our results and compares them to previous results. Review of X25519 X25519 elliptic-curve Diffie–Hellman key-exchange was introduced in 2006 by Bernstein [5]. It is based on arithmetic on the Montgomery curve Curve25519 with equation $$\begin{aligned} E: y^2 = x^3 + 486662 x^2 +x \end{aligned}$$ defined over the field \({\mathbb {F}_{2^{255}-19}}\). Computation of a shared secret, given a 32-byte public key and a 32-byte secret key, proceeds as follows: The 32-byte public key is the little-endian encoding of the x-coordinate of a point P on the curve; the 32-byte secret key is the little-endian encoding of a 256-bit scalar s. The most significant bit of this scalar is set to 0, the second-most significant bit of the scalar is set to 1, and the 3 least significant bits of the scalar are set to 0. The 32-byte shared secret is the little-endian encoding of the x-coordinate of [s]P. Computation of a Diffie–Hellman key pair uses the same computation, except that the public key is replaced by the fixed value 9, which is the x-coordinate of the chosen base point of the elliptic curve group. In all previous implementations of X25519, and also in our implementations, the x-coordinate of [s]P is computed by using the efficient x-coordinate-only formulas for differential addition and doubling introduced by Montgomery [32]. More specifically, the computation uses a sequence of 255 so-called "ladder steps"; each ladder step performs one differential addition and one doubling. Each ladder step is followed by a conditional swap of two pairs of coordinates. The whole computation is typically called Montgomery ladder; a pseudo-code description of the Montgomery ladder is given in Algorithm 1. 
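Algorithm 1 itself is not reproduced in this extract. As orientation, the following C sketch shows one standard way to arrange the Montgomery ladder just described (initialization, 255 ladder steps, and conditional swaps). The fe25519 type and the fe25519_*, ladderstep and cswap helpers are assumptions for illustration, not the paper's actual API; real implementations, including the paper's, typically merge the two swaps of consecutive iterations by conditioning a single swap on the XOR of adjacent scalar bits.

```c
#include <stdint.h>

typedef struct { uint32_t v[8]; } fe25519;   /* assumed field-element type */

/* assumed field-arithmetic helpers, not the paper's API */
extern void fe25519_setzero(fe25519 *r);
extern void fe25519_setone(fe25519 *r);
extern void fe25519_cswap(fe25519 *a, fe25519 *b, uint32_t c); /* constant time */
extern void ladderstep(const fe25519 *x1, fe25519 *x2, fe25519 *z2,
                       fe25519 *x3, fe25519 *z3);              /* cf. Algorithm 2 */

/* Computes (xr:zr), the projective x-coordinate of [s]P, where xp = x(P)
 * and s is the clamped 255-bit scalar described above. */
void mladder(fe25519 *xr, fe25519 *zr, const fe25519 *xp, const uint8_t s[32])
{
    fe25519 x1 = *xp, x2, z2, x3 = *xp, z3;
    fe25519_setone(&x2);  fe25519_setzero(&z2);   /* (x2:z2) = point at infinity */
    fe25519_setone(&z3);                          /* (x3:z3) = P                 */

    for (int i = 254; i >= 0; i--) {
        uint32_t bit = (s[i >> 3] >> (i & 7)) & 1;
        fe25519_cswap(&x2, &x3, bit);             /* select operands for this bit */
        fe25519_cswap(&z2, &z3, bit);
        ladderstep(&x1, &x2, &z2, &x3, &z3);      /* differential add + double    */
        fe25519_cswap(&x2, &x3, bit);             /* swap back                    */
        fe25519_cswap(&z2, &z3, bit);
    }
    *xr = x2;  *zr = z2;   /* affine x = x2/z2 via the inversion described below */
}
```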
The cswap function in that algorithms swaps its first two arguments \(X_1\) and \(X_2\) if its third argument \(c=1\). This could easily be achieved through an if-statement, but all of our implementations instead use bit-logical operations for the conditional swap to eliminate a possible timing side-channel. In all our implementations we achieve this by computing a temporary value \(t = (X_1 \oplus X_2) \times c\) and further executing an XOR of this result with the original values \(X_1\) and \(X_2\), i.e. \(X_1 = X_1 \oplus t\) and \(X_2 = X_2 \oplus t\). For the ladder-step computation we use formulas that minimize the number of temporary (stack) variables without sacrificing performance. Our implementations need stack space for only two temporary field elements. Algorithm 2 presents a pseudo-code description of the ladder step with these formulas, where a24 denotes the constant \((486662+2)/4 = 121666\). Note that each ladder step takes 5 multiplications, 4 squarings, 1 multiplication by 121666, and a few additions and subtractions in the finite field \({\mathbb {F}_{2^{255}-19}}\). At the end of the Montgomery ladder, the result x is obtained in projective representation, i.e., as a fraction \(x = X/Z\). X25519 uses one inversion and one multiplication to obtain the affine representation. In most (probably all) previous implementations, and also in our implementations, the inversion uses a sequence of 254 squarings and 11 multiplications to raise Z to the power of \(2^{255}-21\). The total computational cost of X25519 scalar multiplication in terms of multiplications (\(\mathbf {M}\)) and squarings (\(\mathbf {S}\)) is thus \(255\cdot (5\,\mathbf {M} + 4\,\mathbf {S})+254\,\mathbf {S}+12\,\mathbf {M} = 1287\,\mathbf {M}+1274\,\mathbf {S}\). Implementation on AVR ATmega The AVR ATmega family of microcontrollers The AVR ATmega is a family of 8-bit microcontrollers. The architecture features a register file with 32 8-bit registers named \(\mathtt{R0},\ldots ,\mathtt{R31}\). Some of these registers are special: The register pair (R26,R27) is aliased as X, the register pair (R28,R29) is aliased as Y, and the register pair (R30,R31) is aliased as Z. These register pairs are the only ones that can be used as address registers for load and store instructions. The register pair (R0,R1) is special because it always holds the 16-bit result of an \(8{\times }8\)-bit multiplication. The instruction set is a typical 8-bit RISC instruction set. The most important arithmetic instructions for big-integer arithmetic—and thus also large-characteristic finite-field arithmetic and elliptic-curve arithmetic—are 1-cycle addition (ADD) and addition-with-carry (ADC) instructions, 1-cycle subtraction (SUB) and subtraction-with-borrow (SBC) instructions, and the 2-cycle unsigned-multiply (MUL) instruction. Furthermore, our squaring routine (see below) makes use of 1-cycle left-shift (LSL) and left-rotate (ROL) instructions. Both instructions shift their argument to the left by one bit and both instructions set the carry flag if the most-significant bit was set before the shift. The difference is that LSL sets the least-significant bit of the result to zero, whereas ROL sets it to the value of the carry flag. The AVR instruction set offers multiple instructions for memory access. All these instructions take 2 cycles. The LD instruction loads a value from memory to an internal general-purpose register. The ST instruction stores a value from register to memory. 
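Algorithm 2 is likewise not shown in this extract. Using the same assumed fe25519 helpers as in the ladder sketch above (plus assumed fe25519_add, fe25519_sub, fe25519_mul, fe25519_square and fe25519_mul121666 routines), one standard way to realize a ladder step with exactly the stated cost of 5 multiplications, 4 squarings and 1 multiplication by 121666 is the following. For readability it uses more named temporaries than the two stack variables the authors manage with; it is an illustration, not the paper's code.

```c
/* assumed field-arithmetic helpers */
extern void fe25519_add(fe25519 *r, const fe25519 *a, const fe25519 *b);
extern void fe25519_sub(fe25519 *r, const fe25519 *a, const fe25519 *b);
extern void fe25519_mul(fe25519 *r, const fe25519 *a, const fe25519 *b);
extern void fe25519_square(fe25519 *r, const fe25519 *a);
extern void fe25519_mul121666(fe25519 *r, const fe25519 *a);

/* One x-only ladder step: given x1 = x(P) and the projective pairs
 * (x2:z2) = x([n]P), (x3:z3) = x([n+1]P), compute x([2n]P) and x([2n+1]P).
 * Cost: 5 mul + 4 sq + 1 mul by a24 = 121666, matching the count above. */
void ladderstep(const fe25519 *x1, fe25519 *x2, fe25519 *z2,
                fe25519 *x3, fe25519 *z3)
{
    fe25519 a, aa, b, bb, e, c, d, da, cb, t;

    fe25519_add(&a, x2, z2);          /* A  = X2 + Z2                 */
    fe25519_square(&aa, &a);          /* AA = A^2                (S)  */
    fe25519_sub(&b, x2, z2);          /* B  = X2 - Z2                 */
    fe25519_square(&bb, &b);          /* BB = B^2                (S)  */
    fe25519_sub(&e, &aa, &bb);        /* E  = AA - BB                 */
    fe25519_add(&c, x3, z3);          /* C  = X3 + Z3                 */
    fe25519_sub(&d, x3, z3);          /* D  = X3 - Z3                 */
    fe25519_mul(&da, &d, &a);         /* DA = D * A              (M)  */
    fe25519_mul(&cb, &c, &b);         /* CB = C * B              (M)  */

    fe25519_add(&t, &da, &cb);
    fe25519_square(x3, &t);           /* X3 = (DA + CB)^2        (S)  */
    fe25519_sub(&t, &da, &cb);
    fe25519_square(&d, &t);           /*      (DA - CB)^2        (S)  */
    fe25519_mul(z3, &d, x1);          /* Z3 = X1 * (DA - CB)^2   (M)  */

    fe25519_mul(x2, &aa, &bb);        /* X2 = AA * BB            (M)  */
    fe25519_mul121666(&c, &e);        /*      a24 * E                 */
    fe25519_add(&t, &bb, &c);         /*      BB + a24 * E            */
    fe25519_mul(z2, &e, &t);          /* Z2 = E * (BB + a24*E)   (M)  */
}
```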
An important feature of the AVR is the support of pre-decrement and post-increment addressing modes that are available for the X, Y, and Z registers. For the registers Y and Z there also exist a displacement addressing mode where data in memory can be indirectly addressed by a fixed offset. This has the advantage that only a 16-bit base address needs to be stored in registers while the addressing of operands is done by indirect displacement and without changing the base-address value. We applied addressing with indirect displacement as much as possible in our code to increase efficiency. AVR ATmega microcontrollers come in various different memory configurations. For example, our benchmarking platform features an ATmega2560 with 256 KB of ROM and 8 KB of RAM. Other common configurations are the ATmega128 with 128 KB of ROM and 4 KB of RAM and the ATmega328 with 32 KB of ROM and 2 KB of RAM. All cycle counts for arithmetic operations reported in this section have been obtained from a cycle-accurate simulation (using the simulator of the Atmel AVR Studio). In our AVR implementation we use an unsigned radix-\(2^8\) representation for field elements. An element f in \(\mathbb {F}_{2^{255}-19}\) is thus represented as \(f=\sum _{i=0}^{31} f_i2^{8i} \,\hat{=}\,(f_0, f_1, \ldots f_{31})\) with \(f_i \in \{0, \ldots , 255\}\). For fast 256-bit-integer multiplication on the AVR we use the recently proposed highly optimized 3-level Karatsuba multiplication routine by Hutter and Schwabe [24]. More specifically, we use the branch-free variant of their software, which is slightly slower than the "branched" variant but allows easier verification of constant-time behavior. This branch-free subtractive Karatsuba routine takes 4961 cycles without function-call overhead and thus outperforms previous results presented by Hutter and Wenger [22], and by Seo and Kim [38, 39] by more than \(18\,\%\). Not only is the Karatsuba multiplier from [24] faster than all previous work, it is also smaller than previous fully unrolled speed-optimized multiplication routines. For some applications, the size of 7616 bytes might still be considered excessive so we investigated what the time-area tradeoff is for not fully unrolling and inlining Karatsuba. A multiplier that uses 3 function calls to a \(128{\times }128\)-bit multiplication routine instead of fully inlining those half-size multiplication takes 5064 cycles and has a size of only 3366 bytes. Note that a single 2-level \(128{\times }128\)-bit Karatsuba multiplication takes 1369 cycles, therefore 957 cycles are due to the higher-level Karatsuba overhead. Because of the better speed/size trade-off, we therefore decided to integrate the latter multiplication method needing 103 cycles in addition but saves almost 56 % of code size. Section 6 reports results for X25519 for both an implementation with the faster multiplier from [24] and the smaller and slightly slower multiplier. The details of the size-reduced Karatsuba multiplication are as follows. Basically, we split the \(256\times 256\)-bit multiplication into three \(128\times 128\)-bit multiplications. We follow the notation of [24] and denote the results of these three smaller multiplications with L for the low part, H for the high part, and M for the middle part. Each of these multiplications is implemented as a 2-level refined Karatsuba multiplication and is computed via a function call named MUL128. This function expects the operands in the registers X and Y and the address of the result in Z. 
After the low-word multiplication L, we increment the operand and result-address pointers and perform the high-word multiplication H by a second call to MUL128. Note that here we do not merge the refined Karatsuba addition of the upper half of L into the computation of H as described in [24], because we would need additional conditions in MUL128, which we avoid in general. Instead, we accumulate the higher words of L right after the computation of H. This requires the additional loading of all operands and the storing of the accumulated result back to memory—but this can be done in the higher-level Karatsuba implementation, which makes our code more flexible and smaller in size. Finally, we prepare the input operands for the middle-part multiplication M by a constant-time calculation of the absolute differences and a conditional negation. Squaring We implemented a dedicated squaring function to improve the speed of X25519. For squaring, we also make use of Karatsuba's technique, but use only 2 levels and apply some simplifications that hold in general. For example, in squaring many cross-product terms are equal, so that the computation of those terms needs to be performed only once. These terms can then simply be shifted to the left by one bit in order to double them. Furthermore, the middle part M of the Karatsuba squaring is the square of the (absolute) difference of the input halves and hence never negative, so no conditional negation is required. For squaring, we hence do not need to distinguish between a "branched" and a "branch-free" variant as opposed to the multiplication proposed in [24]. Similar to multiplication, we implemented a squaring function named SQR128, which is then called in a higher-level 256-bit squaring implementation. The 128-bit squaring operation needs 872 cycles. Again we use two versions of squaring, one with function calls and one fully inlined version. The fully inlined version needs a total of 3324 cycles. Besides 256-bit multiplication and squaring, we implemented a separate modular reduction function as well as 256-bit modular addition and subtraction. All of these operations are implemented in assembly to obtain the best performance. During scalar multiplication in X25519, we decided to reduce all elements modulo \(2^{256}-38\) and perform a "freezing" operation at the end of X25519 to finally reduce modulo \(2^{255}-19\). This has the advantage that modular reduction is simplified throughout the entire computation, because the intermediate results need not be fully reduced but only almost reduced, which saves additional costly reduction loops. In total, modular addition and subtraction need 592 cycles. Modular reduction needs 780 cycles. The Montgomery arithmetic on Curve25519 requires a multiplication with the curve parameter \(a24=121666\) (see Algorithm 2 for the usage in the Montgomery-ladder step). We specialized this multiplication in a dedicated function called fe25519_mul121666. It makes use of the fact that the constant has 17 bits; multiplying by this constant needs only 2 multiplication instructions and several additions per input byte. The multiplication of a 256-bit integer by 121666 needs 695 cycles. All these cycle counts are for the fully speed-optimized version of our software, which unrolls all loops. Our smaller software for X25519 uses (partially) rolled loops, which take a few extra cycles.
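To make the structure of the subtractive Karatsuba split described above concrete, the sketch below applies the same identity at word scale in plain C: a 32x32-bit product is built from three 16x16-bit products, mirroring (one level down) how the AVR code builds the 256x256-bit product from three MUL128 calls, including the branch-free absolute differences and the conditional negation of the middle part. It is an illustration with our own names, not the paper's assembly.

```c
#include <stdint.h>

/* Subtractive Karatsuba at word scale: a 32x32 -> 64-bit product from
 * three 16x16 -> 32-bit products.  L, H and M play the same roles as in
 * the text above; the masks ma/mb realize the constant-time absolute
 * differences and the sign-dependent (conditional) negation. */
static uint64_t mul32_subtractive_karatsuba(uint32_t a, uint32_t b)
{
    uint16_t al = (uint16_t)a, ah = (uint16_t)(a >> 16);
    uint16_t bl = (uint16_t)b, bh = (uint16_t)(b >> 16);

    uint32_t L = (uint32_t)al * bl;                  /* low  half product   */
    uint32_t H = (uint32_t)ah * bh;                  /* high half product   */

    uint16_t ma = (uint16_t)(-(al < ah));            /* all-ones if al < ah */
    uint16_t mb = (uint16_t)(-(bl < bh));            /* all-ones if bl < bh */
    uint16_t da = (uint16_t)(((al - ah) ^ ma) - ma); /* |al - ah|           */
    uint16_t db = (uint16_t)(((bl - bh) ^ mb) - mb); /* |bl - bh|           */
    uint32_t M  = (uint32_t)da * db;                 /* |middle product|    */

    /* middle term al*bh + ah*bl = L + H - (al-ah)*(bl-bh) */
    uint64_t neg = (uint64_t)0 - (uint64_t)((ma ^ mb) & 1);
    uint64_t mid = (uint64_t)L + H - (((uint64_t)M ^ neg) - neg);

    return (uint64_t)L + (mid << 16) + ((uint64_t)H << 32);
}
```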
Implementation on MSP430X This section describes our implementation of X25519 on MSP430X microcontrollers, which is based on and improves the software presented in [21]. We implemented X25519 for MSP430X devices that feature a 16-bit hardware multiplier as well as for those that feature a 32-bit hardware multiplier. We present execution results measured on an MSP430FR5969 [41], which has an MSP430X CPU, 64 KB of non-volatile memory (FRAM), 2 kB SRAM and a 32-bit memory-mapped hardware multiplier. The result of a \(16 \times 16\)-bit multiplication is available in 3 cycles on both types of MSP430X devices, those that have a 32-bit hardware multiplier as well as those that have a 16-bit hardware multiplier (cf. [41, 42]). Thus, our measurement results can be generalized to other microcontrollers from the MSP430X family. All cycle counts presented in this section were obtained when executing the code on a MSP-EXP430FR5969 Launchpad development board and measuring the execution time using the debugging functionality of the IAR Embedded Workbench IDE. The MSP430X The MSP430X has a 16-bit RISC CPU with 27 core instructions and 24 emulated instructions. The CPU has 16 16-bit registers. Of those, only R4 to R15 are freely usable working registers, and R0 to R3 are special-purpose registers (program counter, stack pointer, status register, and constant generator). All instructions execute in one cycle, if they operate on contents that are stored in CPU registers. However, the overall execution time for an instruction depends on the instruction format and addressing mode. The CPU features 7 addressing modes. While indirect auto-increment mode leads to a shorter instruction execution time compared to indexed mode, only indexed mode can be used to store results in RAM. We consider MSP430X microcontrollers, which feature a memory-mapped hardware multiplier that works in parallel to the CPU. Four types of multiplications, namely signed and unsigned multiply as well as signed and unsigned multiply-and-accumulate are supported. The multiplier registers have to be loaded with CPU instructions. The hardware multiplier stores the result in two (in case of 16-bit multipliers) or four (in case of 32-bit multipliers) 16-bit registers. Further a SUMEXT register indicates for the multiply-and-accumulate instruction, whether accumulation has produced a carry bit. However, it is not possible to accumulate carries in SUMEXT. The time required for the execution of a multiplication is determined by the time that it takes to load operands to and store results from the peripheral multiplier registers. The MSP430FR5969 (the target under consideration) belongs to a new MSP430X series featuring FRAM technology for non-volatile memory. This technology has two benefits compared to flash memory. It leads to a reduced power consumption during memory writes and further increases the number of possible write operations. However, as a drawback, while the maximum operating frequency of the MSP430FR5969 is 16 MHz, the FRAM can only be accessed at 8 MHz. Hence, wait cycles have to be introduced when operating the MSP430FR5969 at 16 MHz. For all cycle counts that we present in this section we assume a core clock frequency of 8 MHz. Increasing this frequency on the MSP430FR5969 would incur a penalty resulting from those introduced wait cycles. Note, that this is not the case for MSP430X devices that use flash technology for non-volatile memory. 
In our MSP430X implementation we use an unsigned radix-\(2^{16}\) representation for field elements. An element f in \(\mathbb {F}_{2^{255}-19}\) is thus represented as \(f=\sum _{i=0}^{15} f_i2^{16i} \,\hat{=}\,(f_0, f_1, \ldots f_{15})\) with \(f_i \in \{0, \ldots , 2^{16}-1\}\). In order to be conform with other implementations of X25519, we consider inputs and outputs to and from the scalar multiplication on Curve25519 to be 32-byte arrays. Thus conversions to and from the used representation have to be executed at the beginning and the end of the scalar multiplication. As reduction modulo \(2^{255}-19\) requires bit shifts in the chosen representation of field elements, we reduce intermediate results modulo \(2^{256}-38\) during the entire execution of the scalar multiplication and only reduce the final result modulo \(2^{255}-19\). Hinterwälder, Moradi, Hutter, Schwabe, and Paar presented and compared implementations of various multiplication techniques on the MSP430X architecture in [21]. They considered the carry-save, operand-caching and constant-time Karatsuba multiplication, for which they used the operand-caching technique for the computation of intermediate results. Among those implementations, the Karatsuba implementation performed best. To the best of the authors knowledge, the fastest previously reported result for 256-bit multiplication on MSP430X devices was presented by Gouvêa et al. [18]. In their work the authors have used the product-scanning technique for the multi-precision multiplication. We implemented and compared the product-scanning multiplication and the constant-time Karatsuba multiplication, and this time used the product-scanning technique for the computation of intermediate results of the Karatsuba implementation. It turns out that on devices that have a 16-bit hardware multiplier, the constant-time Karatsuba multiplication performs best. On devices that have a 32-bit hardware multiplier the product-scanning technique performs better than constant-time Karatsuba, as it makes best use of the 32-bit multiply-and-accumulate unit of the memory-mapped hardware multiplier. We thus use constant-time Karatsuba in our implementation of X25519 on MSP430X microcontrollers that have a 16-bit hardware multiplier and the product-scanning technique for our X25519 implementation on MSP430Xs that have a 32-bit hardware multiplier. In our product-scanning multiplication implementation, where \(h=f \times g \mod 2^{256}-38\) is computed, we first compute the coefficients of the double-sized array, which results from multiplying f with g and then reduce this result modulo \(2^{256}-38\). We only have 7 general-purpose registers available to store input operands during the multiplication operation. Hence, we cannot store all input operands in working registers, but we keep as many operands in them as possible. For the computation of a coefficient of the double-sized array, which results from multiplying f by g, one has to access the contents of f in incrementing and g in decrementing order, e.g. the coefficient \(h_2\) is computed as \(h_2 = f_0 g_2 + f_1 g_1 + f_2 g_0\). As there is no indirect auto-decrement addressing mode available on the MSP430X microcontroller, we put the contents of g on the stack in reverse order at the beginning of the multiplication, which allows us to access g using indirect auto-increment addressing mode for the remaining part of the multiplication. 
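For orientation, the column-wise (product-scanning) approach just described looks as follows in plain C, with our own names; the actual MSP430X code drives the memory-mapped multiply-and-accumulate registers in assembly and handles the SUMEXT carry bits explicitly.

```c
#include <stdint.h>

/* Product-scanning multiplication of two 256-bit operands held as
 * 16 limbs of 16 bits.  Each column k accumulates all partial products
 * f[i]*g[k-i] before one result limb is written, which is what makes the
 * multiply-and-accumulate unit of the MSP430X attractive here. */
static void mul256_product_scanning(uint16_t r[32],
                                    const uint16_t f[16],
                                    const uint16_t g[16])
{
    uint64_t acc = 0;                         /* column accumulator        */
    for (int k = 0; k < 31; k++) {
        int lo = k < 16 ? 0 : k - 15;         /* limb indices contributing */
        int hi = k < 16 ? k : 15;             /* to column k               */
        for (int i = lo; i <= hi; i++)
            acc += (uint64_t)f[i] * g[k - i]; /* multiply-and-accumulate   */
        r[k] = (uint16_t)acc;                 /* write column limb         */
        acc >>= 16;                           /* carry into next column    */
    }
    r[31] = (uint16_t)acc;
}
```

The 32-limb double-sized result is then reduced modulo \(2^{256}-38\) in a separate step, as in the implementations described here.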
Including function-call and reduction overhead, our 32-bit product-scanning multiplication implementation executes in 2079 cycles on the MSP430FR5969. Without function call and modular reduction, it executes in 1693 cycles. For MSP430X microcontrollers that have a 16-bit hardware multiplier we implemented the constant-time one-level Karatsuba multiplication (refer to Sect. 3). We use the product-scanning technique to compute the three intermediate results L, H and M. For the computation of L, H and M we have seven working registers available to store input operands. Hence, we can store almost the full input that is accessed in decrementing order in working registers and access the eighth required operand of it using indirect addressing mode. Again we first compute the double-sized array resulting from the multiplication of f and h and then reduce this result modulo \(2^{256}-38\). Our modular multiplication implementation dedicated for devices that have a 16-bit hardware multiplier executes in 3193 cycles including function call and modular reduction, and in 2718 cycles excluding those. In order to compute \(h=f^2 \mod 2^{256}-38\), we first compute a double-sized array resulting from squaring f and then reduce this result modulo \(2^{256}-38\). Similar to our multiplication implementation, we use the product-scanning technique for our implementation targeting devices that have a 32-bit hardware multiplier. We again store the input f on the stack in reverse order, allowing us to use indirect auto-increment addressing mode to access elements of f in decrementing order. As mentioned in Sect. 3, many multiplications of cross-product terms occur twice during the execution of the squaring operation. These do not have to be computed multiple times, but can be accounted for by multiplying an intermediate result by two, i.e. shifting it to the left by one bit. As shift operations on the result registers of the memory-mapped hardware multiplier are expensive, we move results of a multiplication back to CPU registers before executing this shift operation. Including function call and modular reduction overhead our squaring implementation executes in 1563 cycles on MSP430X microcontrollers that have a 32-bit hardware multiplier. Without reduction and function call this number decreases to 1171 cycles. Our squaring implementation for MSP430X microcontrollers that have a 16-bit hardware multiplier follows the constant-time Karatsuba approach, where intermediate results are computed using the product-scanning technique. This function executes in 2426 cycles including function call and reduction overhead and in 1935 cycles without. We implemented all finite-field arithmetic in assembly language and all curve arithmetic as well as the conversion to and from the internal representation in C. The x-coordinate-only doubling formula requires a multiplication with the constant 121666. One peculiarity of the MSP430 hardware multiplier greatly improves the performance of the computation of \(h=f \cdot 121666 \mod 2^{256}-38\), which is that contents of the hardware multiplier's MAC registers do not have to be loaded again, in case the processed operands do not change. In case of having a 32-bit hardware multiplier we proceed as follows: The number 121666 can be written as \(1 \cdot 2^{16} + 56130\). We store the value 1 in MAC32H and 56130 in MAC32L and then during each iteration load two consecutive coefficients of the input array f, i.e. 
\(f_i\) and \(f_{i+1}\) to OP2L and OP2H for the computation of two coefficients of the resulting array, namely \(h_i\) and \(h_{i+1}\). The array that results from computing \(f \cdot 121666\) is only two elements longer than the input array, which we reduce as the next step. Using this method, the multiplication with 121666 executes in 352 cycles on MSP430s that have a 32-bit hardware multiplier, including function call and reduction. For the 16-bit hardware multiplier version, we follow a slightly different approach. As we cannot store the full number 121666 in the input register of the hardware multiplier, we proceed as follows: To compute \(h=f \cdot 121666 \mod 2^{256}-38\) we store the value 56130 in the hardware-multiplier register MAC. We then compute each \(h_i\) as \(h_i = f_i \cdot 56130 + f_{i-1}\) for \(i \in [1 \dots 15]\), such that we add the \((i-1)\)th input coefficient to the multiplier's result registers RESLO and RESHI. This step takes care of the multiplication with \(1 \cdot 2^{16}\) for the \((i-1)\)th input coefficient. We further load the ith input coefficient to the register OP2, thus executing the multiply-and-accumulate instruction to compute the ith coefficient of the result. Special care has to be taken with the coefficient \(h_0\), where \(h_0 = f_0 \cdot 56130 + 38 \cdot f_{15}\). The method executes in 512 cycles including function call and reduction overhead. The reduction of a double-sized array modulo \(2^{256}-38\) is implemented in a similar fashion. We store the value 38 in the MAC-register of the hardware multiplier. We then add the ith coefficient of the double-sized input to the result registers of the hardware multiplier and load the \((i+16)\)th coefficient to the OP2-register. In the 32-bit version of this reduction implementation the only difference is that two consecutive coefficients can be processed in each iteration, i.e. the ith and \((i+1)\)th coefficients are added to the result registers and the \((i+16)\)th and \((i+17)\)th coefficients are loaded to the OP2-registers. The modular addition \(h=f+g \mod 2^{256}-38\), which executes in 186 cycles on the MSP430, first adds the two most significant words of f and g. It then extracts the carry and the most significant bit of this result and multiplies those with 19. This is added to the least significant word of f. All other coefficients of f and g are added with carry to each other. The carry resulting from the addition of the second most significant words of f and g is added to the sum that was computed first. For the computation of \(h=f-g\), we first subtract g with borrow from f. If the result of the subtraction of the most significant words produces a negative result, the carry flag is cleared, while, if it produces a positive result, the carry flag is set. We add this carry flag to a register tmp that was set to 0xffff before, resulting in the contents of tmp being 0xffff in case of a negative result and 0 in case of a positive result of the subtraction. We AND tmp with 38, subtract this from the lowest resulting coefficient, and ripple the borrow through. Again, a possible negative result of this procedure is reduced using the same method, minus the rippling of the borrow. This modular subtraction executes in nearly the same time as the modular addition, taking 199 cycles including function-call overhead.
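As a generic illustration of the reduction idea behind these simple operations (not the register-level sequence just described, which folds the top words via a multiplication by 19 in the hardware multiplier), an addition modulo \(2^{256}-38\) on the 16-limb representation can be written as follows; names are ours.

```c
#include <stdint.h>

/* Weakly reduced addition modulo 2^256 - 38 on 16 limbs of 16 bits.
 * A carry out of bit 256 is folded back in with weight 38, because
 * 2^256 = 38 (mod 2^256 - 38).  Running the fold twice covers the rare
 * case where the first fold itself produces another carry out of bit 256. */
static void fe25519_add_weak(uint16_t r[16],
                             const uint16_t f[16], const uint16_t g[16])
{
    uint32_t c = 0;
    for (int i = 0; i < 16; i++) {
        c += (uint32_t)f[i] + g[i];
        r[i] = (uint16_t)c;
        c >>= 16;
    }
    for (int pass = 0; pass < 2; pass++) {
        c *= 38;
        for (int i = 0; i < 16; i++) {
            c += r[i];
            r[i] = (uint16_t)c;
            c >>= 16;
        }
    }
}
```

The result is only weakly reduced (below \(2^{256}\)), which matches the strategy of keeping intermediate results almost reduced and applying the final reduction modulo \(2^{255}-19\) only once at the end.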
Implementation on ARM Cortex-M0 The ARM Cortex M0 The ARM Cortex M0 and Cortex M0\(+\) cores (M0) are the smallest members of ARM's recent Cortex-M series, targeting low-cost and low-power embedded devices. The M0 implements a load-store architecture. The register file consists of 16 registers \(\mathtt{r0},\ldots ,\mathtt{r15}\), including 3 special-purpose registers for the program counter (pc) in r15, the return addresses (lr) in r14, and the stack pointer (sp) in r13. Unlike its larger brothers from the ARM Cortex M series, the M0 encodes arithmetic and logic instructions exclusively in 16 bits. This 16-bit instruction encoding results in constraints with respect to register addressing. As a result, the eight lower registers \(\mathtt{r0},\ldots ,\mathtt{r7}\) can be used much more flexibly than the upper registers \(\mathtt{r8}_,\ldots ,\mathtt{r14}\). More specifically, only the lower registers \(\mathtt{r0},\ldots ,\mathtt{r7}\) may be used for pointer-based memory accesses, as destination of a load or source of a store, and for holding memory-address information. Also almost all arithmetic and logic instructions like addition and subtraction only accept lower registers as operands and results. The upper registers are mainly useful as fast temporary storage, i.e., in register-to-register-move instructions. The M0 core supports a multiplication instruction which receives two 32-bit operands and produces a 32-bit result. Note that this is substantially different from the AVR ATmega and the MSP430X; on the M0 the upper half of the 64-bit result is cut off. For our purpose of fast multi-precision integer arithmetic, we consider the multiplier as a 16-bit multiplier. The main difference to AVR and MSP430X is then, that the result is produced in only one register. The M0 is available in two configurations, where multiplication either costs 1 cycle or 32 cycles. In this paper we focus on M0 systems featuring the single-cycle hardware multiplier, a design choice present on most M0 implementations that we are aware of. All arithmetic and logic operations, including the multiplication operate on 32-bit inputs and outputs. They all require a single clock cycle. The M0 uses a von Neumann memory architecture with a single bus being used for both, code and data. Consequently all load and store instructions require one additional cycle for the instruction fetch. This constitutes one of the key bottlenecks to consider for the implementation of the arithmetic algorithms. Since a typical load/store instruction requires 2 cycles, while an arithmetic or multiplication operation only takes a single cycle, it is very important to make best usage of the limited memory bandwidth. Consequently it is part of our strategy to make loads and stores always operate on full 32-bit operands and use the load and store multiple (LDM/STM) instructions wherever possible. These LDM/STM instructions transfer n (up to eight) 32-bit words in one instruction, with a cost of only \(n+1\) cycles. Like the other two platforms considered in this paper, the ARM Cortex-M0 also comes in very different memory configurations. The STM32F0-Value chips have between 16 and 256 KB of ROM and between 4 and 32 KB of RAM. For our benchmarks and tests we used a development board with an STM32F051R8T6 microcontroller with 64 KB of ROM and 8 KB of RAM. All cycle counts for arithmetic operations reported in this section have been obtained using the systick counter on this development board. 
In comparison to the other architectures discussed in this paper, the M0 platform benefits from its single-cycle \(32\times 32\rightarrow 32\)-bit multiplication instruction that directly operates on the general-purpose register file. The weakness of this architecture is its slow memory interface and the restrictions resulting from the 16-bit encoding of instructions: only the small set of 8 registers \(\mathtt{r0},\ldots ,\mathtt{r7}\) can be used in arithmetic instructions and memory accesses. In our Cortex-M0 implementation we use an unsigned radix-\(2^{32}\) representation for field elements. An element f in \(\mathbb {F}_{2^{255}-19}\) is thus represented as \(f=\sum _{i=0}^{7} f_i2^{32i} \,\hat{=}\,(f_0, f_1, \ldots, f_{7})\) with \(f_i \in \{0, \ldots , 2^{32}-1\}\). It turns out that the most efficient strategy for multiplication of \(n=256\)-bit operands is a three-level refined Karatsuba method. To obtain constant-time behavior and avoid carry propagation, we use a variant of subtractive Karatsuba. The n-bit input operands \(A = A_\ell + 2^{n/2}A_h\) and \(B = B_\ell + 2^{n/2} B_h\) are first decomposed into a lower and a higher half. Then one computes the partial products \(L=A_\ell\cdot B_\ell\) and \(H=A_h\cdot B_h\). The subtractive Karatsuba formulas involve a product term \(M=(A_\ell -A_h)\cdot (B_\ell -B_h)\) which may be either positive or negative. The full result may then be calculated by use of the subtractive Karatsuba formula \(A \cdot B = L + 2^{n/2} (L + H - M) + 2^n \cdot H\). By use of the refined Karatsuba method, we reduce the storage needed to calculate the middle part M and at the same time save several additions on each Karatsuba level. Analysis of the low-level constraints of the CPU architecture revealed that it is considerably more efficient not to use a signed multiplication yielding M directly, but to first calculate the absolute value \(|M|=|A_\ell -A_h|\cdot |B_\ell -B_h|\) and separately keep track of the sign t of the result. This stems mainly from the observation that sign changes (i.e., two's complements) of operands may be calculated in-place without requiring temporary spill registers. Actually, the variant in our M0 implementation swaps the difference in one factor of |M|, i.e., \(|M|=|A_\ell -A_h|\cdot |B_h-B_\ell |\), and compensates for this by toggling the sign bit t. This makes branch-free combination of the partial results slightly more efficient. The calculation thus involves calculating the absolute values of the differences \(|A_\ell -A_h|\) and \(|B_h - B_\ell |\), the sign t, and a conditional negation of the positive result |M|. As in the AVR implementation, we do not use any conditional branches, but instead use conditional computation of the two's complements. Note that the conditional calculation of the two's complement involves first a bitwise exclusive-or operation with either 0 or \(-1\), depending on the sign. Subsequently, a subtraction of either \(-1\) or 0 follows, which is equivalent to an addition of 1 or 0. For our implementation, we represent the field elements as arrays of eight 32-bit words. Since the architecture effectively provides only a 16-bit multiplier for full products, we obtain a 32-bit multiplication with 17 arithmetic instructions: 4 to convert the registers from 32 to 16 bits, 4 multiplications, 1 to save an extra input (multiplication overwrites one of the inputs), and 8 instructions (4 additions and 4 shifts) to add the middle part into the final result.
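The following C sketch illustrates one level of the subtractive-Karatsuba idea at the smallest scale, a \(32\times 32\rightarrow 64\)-bit product assembled from \(16\times 16\rightarrow 32\)-bit half-products, including the branch-free sign handling described above. It is a hypothetical model for exposition only (the actual implementation applies the same idea to 64-, 128- and 256-bit operands in assembly), so the function name and the exact sequencing are not taken from the paper.

```c
#include <stdint.h>

/* One level of subtractive Karatsuba for a 32x32->64-bit product.
 * The middle term is computed from the absolute differences
 * |A_l - A_h| and |B_h - B_l| together with a separate sign t, and
 * the conditional negation is branch-free (XOR with 0 or all-ones,
 * then add t), mirroring the strategy described in the text. */
static uint64_t mul32_karatsuba(uint32_t a, uint32_t b)
{
    uint16_t al = (uint16_t)a, ah = (uint16_t)(a >> 16);
    uint16_t bl = (uint16_t)b, bh = (uint16_t)(b >> 16);

    uint32_t L = (uint32_t)al * bl;                   /* A_l * B_l */
    uint32_t H = (uint32_t)ah * bh;                   /* A_h * B_h */

    uint32_t t   = ((uint32_t)(al < ah)) ^ ((uint32_t)(bh < bl)); /* sign of M */
    uint16_t da  = (al >= ah) ? (uint16_t)(al - ah) : (uint16_t)(ah - al);
    uint16_t db  = (bh >= bl) ? (uint16_t)(bh - bl) : (uint16_t)(bl - bh);
    uint32_t abM = (uint32_t)da * db;                 /* |A_l-A_h| * |B_h-B_l| */

    uint64_t mask = 0 - (uint64_t)t;                  /* 0 or all-ones */
    int64_t  M    = (int64_t)(((uint64_t)abM ^ mask) + t);  /* +-|M| */

    /* subtractive Karatsuba with the swapped factor:
     * A*B = L + 2^16*(L + H + M) + 2^32*H, M = (A_l - A_h)*(B_h - B_l) */
    uint64_t mid = (uint64_t)((int64_t)L + (int64_t)H + M);
    return (uint64_t)L + (mid << 16) + ((uint64_t)H << 32);
}
```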
Since the 32-bit multiplication requires at least 5 registers, register-to-register moves between the low and high part of the register file are required to perform more than one multiplication. We obtain the 256-bit product using three 128-bit multiplications, each with a cost of 332 cycles. The 128-bit multiplier uses three 64-bit multiplications which take only 81 cycles each. The full 256-bit multiplication requires 1294 cycles, about 700 cycles faster than a fully unrolled product-scanning multiplication. For squaring we also use three levels of refined subtractive Karatsuba. We use the same two observations as on the AVR to improve squaring performance compared to multiplication performance. First, all of the partial results M, L and H entering the Karatsuba formula are determined solely by squaring operations, i.e., no full multiplication is involved. Conventional squaring of an operand \(A = A_\ell + 2^{k}A_h\) would have required two squarings of the lower and upper halves \(A_\ell ^2\) and \(A_h^2\) and one multiplication for the mixed term \(A_\ell \cdot A_h\). Aside from the arithmetic simplification, a big benefit of avoiding this mixed-term multiplication is that one input-operand fetch and register spills to memory may be spared, because for squarings we have only one input operand. This benefit clearly outweighs the extra complexity linked to the additional additions and subtractions within the Karatsuba formula. Second, the sign of the operand M is known to be positive from the very beginning, so the conditional sign change of the intermediate operand M is not necessary. The 64-bit squaring takes 53 cycles using only seven registers; our 128-bit squaring takes only 206 cycles, with the advantage that we handle all temporary storage in the upper half of the register file, i.e., no use of the stack is required. Our 256-bit squaring requires 857 cycles, in comparison to 1110 cycles for an unrolled product-scanning squaring. As expected, the benefit of using Karatsuba is much smaller than for multiplication. Still, the difference between squaring and multiplication is significant, clearly justifying the use of a specialized squaring algorithm when optimizing for speed. For multiplication and squaring we did not merge the arithmetic with the reduction, due to the high register pressure; merging the operations would have led to many register spills. For these operations, we first implement a standard long-integer arithmetic and reduce the result in a second step, using separate functions for multiplication and reduction. Throughout the X25519 calculation we reduce modulo \(2^{256} - 38\) and even allow temporary results to reach up to \(2^{256}-1\). Full reduction is used only for the final result. For addition, subtraction, and multiplication with the curve constant 121666, we use a different strategy and reduce the result on the fly in registers before writing results back to memory. For these simple operations, it is possible to perform all of the arithmetic and reduction without requiring register spills to the stack. The cycle counts for these operations are summarized in Table 1. Multiplication with the curve constant is implemented by a combination of addition and multiplication: since the constant has 17 significant bits, it is realized as a 16-bit multiplication combined with a 16-bit shift-and-add operation.
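As an illustration of the separate reduction step, the sketch below folds a 512-bit product, held in sixteen 32-bit limbs, back to eight limbs using \(2^{256} \equiv 38 \pmod{2^{256}-38}\). It is a plain C model under the radix-\(2^{32}\) representation described earlier, not the Cortex-M0 code, and the function name is our own; like the implementation, it leaves the result only weakly reduced (below \(2^{256}\)), deferring full reduction to the final result.

```c
#include <stdint.h>

/* Illustrative reduction of a 512-bit product r[0..15] (radix 2^32)
 * modulo 2^256 - 38, producing a weakly reduced h[0..7]. */
static void fe_reduce(uint32_t h[8], const uint32_t r[16])
{
    uint64_t c = 0;
    /* fold the upper eight limbs down once:
     * r[i+8] * 2^(32*(i+8)) is congruent to r[i+8] * 38 * 2^(32*i) */
    for (int i = 0; i < 8; i++) {
        c += (uint64_t)r[i] + (uint64_t)r[i + 8] * 38;
        h[i] = (uint32_t)c;
        c >>= 32;
    }
    /* at this point c <= 38; fold it in a second time */
    c *= 38;
    for (int i = 0; i < 8; i++) {
        c += h[i];
        h[i] = (uint32_t)c;
        c >>= 32;
    }
    /* a final carry of at most 1 can survive; when it does, the low
     * limbs are tiny, so a single addition completes the weak reduction */
    h[0] += (uint32_t)(c * 38);
}
```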
Table 1 Cycle count on ARM Cortex M0 with single-cycle multiplier for assembly-optimized implementation and optimization for speed The strategy for reducing on the fly consists of two steps. First, the arithmetic operation (addition, subtraction, multiplication by 121666) is implemented on the most significant word. This generates carries in bits 255 and higher that need to be reduced. We strip off these carries resulting from the most significant word (setting bits 255 and higher of the result to zero) and merge the arithmetic for the lower words with reduction. This may result in an additional carry into the most significant word. However, these carries may readily be stored in bit 255 of the most significant word. This way a second carry chain is avoided. Results and comparison This section describes our implementation results for the X25519 Diffie–Hellman key exchange on the aforementioned platforms. We present performance results in terms of the required clock cycles for one scalar multiplication. We furthermore report the required storage and RAM space. A full Diffie–Hellman key exchange requires one fixed-basepoint scalar multiplication and one variable-point scalar multiplication. Our software does not specialize fixed-basepoint scalar multiplication; the cost for a complete key exchange can thus be obtained by multiplying our cycle counts for one scalar multiplication by two. We compare our results to previous implementations of elliptic-curve scalar multiplication at the 128-bit security level (and selected high-performance implementations at slightly lower security levels) on the considered platforms. Results and comparison on AVR ATmega Our results for X25519 scalar multiplication on the AVR ATmega family of microcontrollers and a comparison with previous work are summarized in Table 2. As described in Sect. 3, all low-level functions are written in assembly. The high-level functionality is written in C; for compilation we used gcc-4.8.1 with compiler options -mmcu=atmega2560 -O3 -mcall-prologues. Unlike the cycle counts for subroutines reported in Sect. 3, all cycle counts for full elliptic-curve scalar multiplication reported here were measured using the built-in cycle counters on an Arduino MEGA development board with an ATmega2560 microcontroller. To achieve sufficient precision for the cycle counts, we combined an 8-bit and a 16-bit cycle counter into a 24-bit cycle counter. Table 2 Cycle counts, sizes, and stack usage of elliptic-curve scalar-multiplication software for AVR ATmega microcontrollers Many implementations of elliptic-curve cryptography exist for the AVR ATmega; however, most of them aim at lower security levels of 80 or 96 bits. For example, the TinyECC library by Liu and Ning implements ECDSA, ECDH, and ECIES on the 128-bit, 160-bit, and 192-bit SECG curves [27]. NanoECC by Szczechowiak, Oliveira, Scott, Collier, and Dahab uses the NIST K-163 curve [40]. Also recent ECC software for the AVR ATmega uses relatively low-security curves. For example, Liu et al. [28] report new speed records for elliptic-curve cryptography on the NIST P-192 curve. Also Dalin, Großschädl, Liu, Müller, and Zhang focus on the 80-bit and 96-bit security levels for their optimized implementation of ECC with twisted Edwards curves presented in [14]. Table 2 summarizes the results for elliptic-curve variable-basepoint scalar multiplication on curves that offer at least 112 bits of security.
Not only are both of our implementations more than 1.5 times faster than all previous implementations of ECC at the 128-bit security level, but the small implementation is also considerably smaller than all previous implementations. As also stated in the footnote, the size comparison with the MoTE-ECC software presented by Liu, Wenger, and Großschädl [29] is not fair, because their software also optimizes fixed-basepoint scalar multiplication and claims a performance of 30,510,000 cycles for ephemeral Diffie–Hellman (one fixed-point and one variable-point scalar multiplication). Even under the assumption that this is the right measure for ECDH performance—which means that ephemeral keys are not re-used for several sessions; for a discussion, see [11, Appendix D]—our small implementation offers better speed and size than the one presented in [29]. The only implementation that is smaller than ours and offers reasonably close performance is the one by Gura, Patel, Wander, Eberle, and Chang Shantz presented in [20]; however, that implementation uses a curve that offers only 112 bits of security. The only implementation that is faster than ours is the DH software on the NIST-K233 curve by Aranha, Dahab, López, and Oliveira presented in [2]; however, this software also offers only 112 bits of security, has very large ROM and RAM consumption, and uses a binary elliptic curve with efficiently computable endomorphisms, which is commonly considered a less conservative choice. As pointed out in the footnote, the size comparison to [2] is also not entirely fair, because their software also contains a specialized fixed-basepoint scalar multiplication. Results and comparison on MSP430X Our results for Curve25519 on the MSP430X microcontroller and a comparison with related previous work are summarized in Table 3. As for the AVR comparison, we only list results that target reasonably high security levels. For our implementation we report cycle counts on the MSP430FR5969 for 8 MHz and 16 MHz. One might think that the cycle counts are independent of the frequency; however, due to the limited access frequency of the non-volatile (FRAM) memory of the MSP430FR5969 (see Sect. 4), core clock frequencies beyond 8 MHz introduce wait cycles for memory access. Table 3 Cycle counts, sizes, and stack usage of elliptic-curve scalar-multiplication software for MSP430X microcontrollers As mentioned in Sect. 4, all arithmetic operations in \({\mathbb {F}_{2^{255}-19}}\) (aside from inversion) are implemented in assembly. The high-level functionality is written in C; for compilation we used gcc-4.6.3 with compiler options -mmcu=msp430fr5969 -O3. All cycle counts reported in this section were obtained by measuring the cycle count when executing the code on an MSP-EXP430FR5969 LaunchPad Development Kit [43], using the cycle counters of the chip, unlike Sect. 4, where cycle counts on the board were obtained using the debugging functionality of the IAR Embedded Workbench IDE. These cycle counters have a resolution of only 16 bits, which is not enough to benchmark our software. We therefore use a divisor of 8 (i.e., the counter is increased every 8 cycles) and increase a global 64-bit variable every time an overflow interrupt of the on-chip counter is triggered. This gives us a counter with reasonable resolution and relatively low interrupt-handling overhead, and makes it possible to independently reproduce our results without the use of the proprietary IAR Workbench IDE.
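For reference, reconstructing a cycle count from this setup amounts to the small helper below. The names and types are illustrative, not taken from our benchmarking code: the 16-bit on-chip counter advances once every 8 CPU cycles, and the 64-bit software variable counts its overflow interrupts.

```c
#include <stdint.h>

/* Hypothetical helper: combine the overflow count maintained in the
 * interrupt handler with the current 16-bit counter value.  One
 * overflow corresponds to 2^16 ticks; one tick is 8 CPU cycles. */
static uint64_t cycles_elapsed(uint64_t overflows, uint16_t counter)
{
    uint64_t ticks = (overflows << 16) | counter;
    return ticks * 8;
}
```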
Naturally, the implementation that makes use of the 32-bit hardware multiplier executes in fewer cycles and requires less program storage space than the implementation that only requires a 16-bit hardware multiplier, because fewer load and store instructions to the peripheral registers of the hardware multiplier have to be executed. A plethora of literature describes implementations of elliptic-curve cryptography on the MSP430 microcontroller architecture, but only few of those works describe an implementation at the 128-bit security level. The first implementation of ECC on an MSP430 microcontroller was presented in 2001 by Guajardo, Blümel, Krieger, and Paar. Their implementation at the 64-bit security level executes in 3.4 million clock cycles [19]. In 2009, Gouvêa and López reported speed records for 160-bit and 256-bit finite-field multiplications on the MSP430, needing 1586 and 3597 cycles, respectively [17]. Their 256-bit Montgomery-ladder scalar multiplication requires 20.4 million clock cycles; their 4-NAF and 5-NAF versions require 13.4 and 13.2 million cycles, respectively. In 2011, Wenger and Werner compared ECC scalar multiplications on various 16-bit microcontrollers [44]. Their Montgomery-ladder-based scalar multiplication on the NIST P-256 elliptic curve executes in 23.9 million cycles on the MSP430. Pendl, Pelnar, and Hutter presented the first ECC implementation running on the WISP UHF RFID tag in the same year [35]. Their implementation of the NIST P-192 curve achieves an execution time of around 10 million clock cycles. They also reported the first 192-bit multi-precision multiplication results, needing 2581 cycles. Gouvêa, Oliveira, and López reported new speed records for different MSP430X architectures in 2012 [18], improving their results from [17]. For the MSP430X architecture (with a 16-bit multiplier) their 160-bit and 256-bit finite-field multiplication implementations execute in 1299 and 2981 cycles, respectively. In 2013, Wenger, Unterluggauer, and Werner [45] presented an MSP430 clone with instruction-set extensions to accelerate big-integer arithmetic. For a NIST P-256 elliptic curve, their Montgomery-ladder implementation using randomized projective coordinates and multiple point-validation checks requires 9 million clock cycles. Without instruction-set extensions their implementation needs 22.2 million cycles. Results and comparison on ARM Cortex M0 Our results for Curve25519 on the ARM Cortex-M0 and a comparison with related work are summarized in Table 4. As described in Sect. 5, all low-level functions for arithmetic in \({\mathbb {F}_{2^{255}-19}}\) (except for inversion, addition and subtraction) are implemented in assembly. It turned out that the addition and subtraction code generated by the compiler was almost as efficient as hand-optimized assembly. Higher-level functions are implemented in C; for compilation we used clang 3.5.0. For C files we use a 3-stage compilation process: we first translate with clang -fshort-enums -mcpu=cortex-m0 -mthumb -emit-llvm -c -nostdlib -ffreestanding -target arm-none-eabi -mfloat-abi=soft scalar mult.c to obtain a .bc file, which is then optimized with opt -Os -misched=ilpmin -misched-regpressure -enable-misched -inline and further translated to a .s file with llc -misched=ilpmin -enable-misched -misched-regpressure. As a result of these settings, addition and subtraction functions were fully inlined.
This improves speed in comparison to calls to assembly functions by avoiding the function-call overhead (at the expense of roughly 1 KB larger code). We obtained cycle counts from the systick cycle counter of an STM32F0Discovery development board. We also experimented with an LPC1114 Cortex-M0 chip, but were unable to achieve the full performance of the Cortex-M0 even for very simple code (like a sequence of 1000 NOPs). For the "default" power profile, the cycle counts we obtained were exactly a factor of 1.25 higher than expected. When switching to the "performance" profile (see [34, Sect. 7.16.5]), we achieved better performance, but still not the expected cycle counts. Table 4 Cycle counts, sizes, and stack usage of elliptic-curve scalar-multiplication software for ARM Cortex-M0 microcontrollers ARM's Cortex-M microcontrollers are rapidly becoming the device of choice for applications that previously used less powerful 8-bit or 16-bit microcontrollers. It is surprising that there is relatively little previous work on speeding up ECC on Cortex-M microcontrollers and in particular on the Cortex-M0. Probably the most impressive previous work has recently been presented by De Clercq, Uhsadel, Van Herrewege, and Verbauwhede, who achieve a performance of 2,762,000 cycles for variable-basepoint scalar multiplication on the 233-bit Koblitz curve sect233k1 [15]. This result is hard to compare directly to our result for three reasons. First, the curve is somewhat smaller and targets the 112-bit security level rather than the 128-bit security level targeted by our implementation. Second, the implementation in [15] is not protected against timing attacks. Third, the software presented in [15] performs arithmetic on an elliptic curve over a binary field; all the underlying field arithmetic is thus very different. The only scientific paper that we are aware of that optimizes arithmetic on an elliptic curve over a large-characteristic prime field for the Cortex-M0 is the 2013 paper by Wenger, Unterluggauer, and Werner [45]. Their scalar multiplication on the secp256r1 curve is reported to take 10,730,000 cycles, almost exactly 3 times slower than our result. Agence nationale de la sécurité des systèmes d'information: Avis relatif aux paramètres de courbes elliptiques définis par l'Etat français. Journal officiel de la République Française, 0241, 17533 (2011). http://www.legifrance.gouv.fr/affichTexte.do?cidTexte=JORFTEXT000024668816. Aranha D.F., Dahab R., López J., Oliveira L.B.: Efficient implementation of elliptic curve cryptography in wireless sensors. Adv. Math. Commun. 4(2), 169–187 (2010). Atmel Corporation: AVR1519: XMEGA-A1 Xplained Training—XMEGA Crypto Engines. 8-bit Atmel Microcontrollers Application Note (2011). http://www.atmel.com/Images/doc8405.pdf. Batina L., Chmielewski Ł., Papachristodoulou L., Schwabe P., Tunstall M.: Online template attacks. In: Meier W., Mukhopadhyay D. (eds.) Progress in Cryptology—INDOCRYPT 2014. Lecture Notes in Computer Science, vol. 8885, pp. 21–36. Springer, Berlin (2014). http://cryptojedi.org/papers/#ota. Bernstein D.J.: Curve25519: new Diffie-Hellman speed records. In: Yung M., Dodis Y., Kiayias A., Malkin T. (eds.) Public Key Cryptography—PKC 2006. Lecture Notes in Computer Science, vol. 3958, pp. 207–228. Springer, Berlin (2006). http://cr.yp.to/papers.html#curve25519. Bernstein D.J.: 25519 naming. Posting to the CFRG mailing list (2014). https://www.ietf.org/mail-archive/web/cfrg/current/msg04996.html. Bernstein D.J., Schwabe P.: NEON crypto.
In: Prouff E., Schaumont P. (eds.) Cryptographic Hardware and Embedded Systems—CHES 2012. Lecture Notes in Computer Science, vol. 7428, pp. 320–339. Springer, Berlin (2012). http://cryptojedi.org/papers/#neoncrypto. Bernstein D.J., Duif N., Lange T., Schwabe P., Yang B.-Y.: High-speed high-security signatures. In: Preneel B., Takagi T (eds.) Cryptographic Hardware and Embedded Systems—CHES 2011. Lecture Notes in Computer Science, vol. 6917, pp. 124–142. Springer, Berlin (2011). see also full version [9]. Bernstein D.J., Duif N., Lange T., Schwabe P., Yang B.-Y.: High-speed high-security signatures. J. Cryptogr. Eng. 2(2), 77–89 (2012). http://cryptojedi.org/papers/#ed25519, see also short version [8]. Bernstein D.J., Chou T., Chuengsatiansup C., Hülsing A., Lange T., Niederhagen R., van Vredendaal C.: How to manipulate curve standards: a white paper for the black hat. Cryptology ePrint Archive, Report 2014/571 (2014). http://eprint.iacr.org/2014/571, see also http://safecurves.cr.yp.to/bada55.html. Bernstein D.J., Chuengsatiansup C., Lange T., Schwabe P.: Kummer strikes back: new DH speed records. In: Iwata T., Sarkar P (eds.) Advances in Cryptology—ASIACRYPT 2014. Lecture Notes in Computer Science, vol. 8873, pp. 317–337. Springer, Berlin (2014). Full version: http://cryptojedi.org/papers/#kummer. Clavier C., Feix B., Gagnerot G., Roussellet M., Verneuil V.: Horizontal correlation analysis on exponentiation. In: Soriano M., Qing S., López J. (eds.) Information and Communications Security. Lecture Notes in Computer Science, vol. 6476, pp. 46–61. Springer, Berlin (2010). http://eprint.iacr.org/2003/237. Costigan N., Schwabe P.: Fast elliptic-curve cryptography on the Cell Broadband Engine. In: Preneel B (ed.) Progress in Cryptology—AFRICACRYPT 2009. Lecture Notes in Computer Science, vol. 5580, pp. 368–385. Springer, Berlin (2009). http://cryptojedi.org/papers/#celldh. Dalin D., Großschädl J., Liu Z., Müller V., Zhang W.: Twisted edwards-form elliptic curve cryptography for 8-bit AVR-based sensor nodes. In: Xu S., Zhao Y. (eds.) Proceeding of the 1st ACM Workshop on Asia Public-key Cryptography—AsiaPKC 2013, pp. 39–44. ACM, New York (2013). http://orbilu.uni.lu/handle/10993/14765. De Clercq R., Uhsadel L., Van Herrewege A., Verbauwhede I.: Ultra low-power implementation of ECC on the ARM Cortex-M0+. In: DAC '14 Proceedings of the The 51st Annual Design Automation Conference on Design Automation Conference, pp. 1–6. ACM, New York (2014). https://www.cosic.esat.kuleuven.be/publications/article-2401.pdf. Gouvêa C.P.L.: Personal communication (2014). Gouvêa C.P.L., López J.: Software implementation of pairing-based cryptography on sensor networks using the MSP430 microcontroller. In: Sendrier N., Roy B. (eds.) Progress in Cryptology—INDOCRYPT 2009. Lecture Notes in Computer Science, vol. 5922, pp. 248–262. Springer, Berlin (2009). http://conradoplg.cryptoland.net/files/2010/12/indocrypt09.pdf. Gouvêa C.P.L., Oliveira L.B., López J.: Efficient software implementation of public-key cryptography on sensor networks using the MSP430X microcontroller. J. Cryptogr. Eng. 2(1) (2012). http://conradoplg.cryptoland.net/files/2010/12/jcen12.pdf. Guajardo J., Blümel R., Krieger U., Paar C.: Efficient implementation of elliptic curve cryptosystems on the TI MSP430x33x family of microcontrollers. In: Kim K (ed.) Public Key Cryptography—PKC 2001. Lecture Notes in Computer Science, vol. 1992, pp. 365–382. Springer, Berlin (2001). 
http://www.emsec.rub.de/media/crypto/veroeffentlichungen/2011/01/21/guajardopkc2001_msp430.pdf. Gura N., Patel A., Wander A., Eberle H., Shantz S.C.: Comparing elliptic curve cryptography and RSA on 8-bit CPUs. In: Joye M (ed.) Cryptographic Hardware and Embedded Systems—CHES 2004. Lecture Notes in Computer Science, vol. 3156, pp. 119–132. Springer, Berlin (2004). www.iacr.org/archive/ches2004/31560117/31560117.pdf. Hinterwälder G., Moradi A., Hutter M., Schwabe P., Paar C.: Full-size high-security ECC implementation on MSP430 microcontrollers. In: Third International Conference on Cryptology and Information Security in Latin America—Latincrypt 2014. Lecture Notes in Computer Science. Springer, Berlin (2014). http://www.emsec.rub.de/research/publications/Curve25519MSPLatin2014/. Hutter M., Wenger E.: Fast multi-precision multiplication for public-key cryptography on embedded microprocessors. In: Preneel, B., Takagi T. (eds.) Cryptographic Hardware and Embedded Systems—CHES 2011. Lecture Notes in Computer Science, vol. 6917, pp. 459–474. Springer, Berlin (2011). http://mhutter.org/papers/Hutter2011FastMultiPrecision.pdf. Hutter M., Schwabe P.: NaCl on 8-bit AVR microcontrollers. In: Youssef A., Nitaj A. (eds.) Progress in Cryptology—AFRICACRYPT 2013. Lecture Notes in Computer Science, vol. 7918, pp. 156–172. Springer, Berlin (2013). http://cryptojedi.org/papers/#avrnacl. Hutter M., Schwabe P.: Multiprecision multiplication on AVR revisited (2014). http://cryptojedi.org/papers/#avrmul. Kenny P.: Formal request from TLS WG to CFRG for new elliptic curves. Posting to the CFRG mailing list (2014). http://www.ietf.org/mail-archive/web/cfrg/current/msg04655.html. Koblitz N.: Elliptic curve cryptosystems. Math. Comput. 48(177), 203–209 (1987). http://www.ams.org/journals/mcom/1987-48-177/S0025-5718-1987-0866109-5/S0025-5718-1987-0866109-5.pdf. Liu A., Ning P.: TinyECC: a configurable library for elliptic curve cryptography in wireless sensor networks. In: International Conference on Information Processing in Sensor Networks—IPSN 2008(April), pp. 22–24, 2008. St. Louis, Missouri, USA, Proceedings, pp. 245–256 (2008). http://discovery.csc.ncsu.edu/pubs/ipsn08-TinyECC-IEEE.pdf. Liu Z., Seo H., Großschädl J., Kim H.: Efficient implementation of NIST-compliant elliptic curve cryptography for sensor nodes. In: Qing S., Zhou J., Liu D. (eds.) Information and Communications Security. Lecture Notes in Computer Science, vol. 8233, pp. 302–317. Springer, Berlin (2013). http://orbilu.uni.lu/bitstream/10993/12934/1/ICICS2013.pdf. Liu Z., Großschädl J., Wenger E.: MoTE-ECC: energy-scalable elliptic curve cryptography for wireless sensor networks. In: Applied Cryptography and Network Security. Lecture Notes in Computer Science, vol. 8479, pp. 361–379. Springer, Berlin (2014). https://online.tugraz.at/tug_online/voe_main2.getvolltext?pCurrPk=77985. Lochter M., Merkle J.: Elliptic curve cryptography (ECC) Brainpool standard curves and curve generation. IETF Request for Comments 5639 (2010). http://tools.ietf.org/html/rfc5639. Miller V.S.: Use of elliptic curves in cryptography. In: Williams H.C. (ed.) Advances in Cryptology—CRYPTO '85: Proceedings. Lecture Notes in Computer Science, vol. 218, pp. 417–426. Springer, Berlin (1986). Montgomery P.L.: Speeding the Pollard and elliptic curve methods of factorization. Math. Comput. 48(177), 243–264 (1987). http://www.ams.org/journals/mcom/1987-48-177/S0025-5718-1987-0866113-7/S0025-5718-1987-0866113-7.pdf. National Institute of Standards and Technology. 
FIPS PUB 186–4 digital signature standard (DSS) (2013). http://nvlpubs.nist.gov/nistpubs/FIPS/NIST.FIPS.186-4.pdf. NXP. LPC1110/11/12/13/14/15 32-bit ARM Cortex-M0 microcontroller; up to 64 kB flash and 8 kB SRAM. Product data sheet, rev. 9.2 edition (2014). http://www.nxp.com/documents/data_sheet/LPC111X.pdf. Pendl C., Pelnar M., Hutter M.: Elliptic curve cryptography on the WISP UHF RFID tag. In: Juels A., Paar C. (eds.) 8th Workshop on RFID Security and Privacy—RFIDsec 2012. Lecture Notes in Computer Science, vol. 7055, pp. 32–47. Springer, Berlin (2012). http://mhutter.org/papers/Pendl2011EllipticCurveCryptography.pdf. ProcFig0. Public key cryptographic algorithm SM2 based on elliptic curves. Part 1: General. (2012). http://www.oscca.gov.cn/UpFile/2010122214822692.pdf. Scott M.: Re: NIST announces set of elliptic curves. Posting to the sci.crypt mailing list (1999). https://groups.google.com/forum/message/raw?msg=sci.crypt/mFMukSsORmI/FpbHDQ6hM_MJ. Seo H., Kim H.: Multi-precision multiplication for public-key cryptography on embedded microprocessors. In: Lee D.H., Yung M (eds.) Information Security Applications. Lecture Notes in Computer Science, vol. 7690, pp. 55–67. Springer, Berlin (2012). doi:10.1007/978-3-642-35416-8 Seo H., Kim H.: Optimized multi-precision multiplication for public-key cryptography on embedded microprocessors. Int. J. Comput. Commun. Eng. 2(3), (2013). http://www.ijcce.org/papers/183-J034.pdf. Szczechowiak P., Oliveira L.B., Scott M., Collier M., Dahab R.: NanoECC: testing the limits of elliptic curve cryptography in sensor networks. In: Verdone R. (ed.) Wireless Sensor Networks. Lecture Notes in Computer Science, vol. 4913, pp. 305–320. Springer, Berlin (2008). http://www.ic.unicamp.br/~leob/publications/ewsn/NanoECC.pdf. Texas Instruments Incorporated. MSP430FR58xx, MSP430FR59xx, MSP430FR68xx, and MSP430FR69xx family user's guide (2012). www.ti.com.cn/cn/lit/ug/slau367f/slau367f.pdf. Texas Instruments Incorporated. MSP430x2xx family user's guide (2004). http://www.ti.com/lit/ug/slau144j/slau144j.pdf. Texas Instruments Incorporated. MSP-EXP430FR5969 LaunchPad Development Kit user's guide (2014). http://www.ti.com/lit/ug/slau535a/slau535a.pdf. Wenger E., Werner M.: Evaluating 16-bit processors for elliptic curve cryptography. In: Prouff E. (ed.) Smart Card Research and Advanced Applications—CARDIS 2011. Lecture Notes in Computer Science, vol. 7079, pp. 166–181. Springer, Berlin (2011). https://online.tugraz.at/tug_online/voe_main2.getvolltext?pCurrPk=59062. Wenger E., Unterluggauer T., Werner M.: 8/16/32 shades of elliptic curve cryptography on embedded processors. In: Paul G., Vaudenay S. (eds.) Progress in Cryptology—INDOCRYPT 2013. Lecture Notes in Computer Science, vol. 8250, pp. 244–261. Springer, Berlin (2013). https://online.tugraz.at/tug_online/voe_main2.getvolltext?pCurrPk=72486. The authors would like to thank Daniel Bernstein for his suggestion to reverse an input to the modular multiplication implementation for the MSP430. Horst Görtz Institute for IT-Security, Ruhr-University Bochum, 44801, Bochum, Germany Michael Düll, Gesine Hinterwälder & Christof Paar Endress+Hauser Conducta GmbH+Co. KG, Dieselstraße 24, 70839, Gerlingen, Germany Björn Haase Cryptography Research, 425 Market Street, 11th Floor, San Francisco, CA, 94105, USA Digital Security Group, Radboud University, PO Box 9010, 6500 GL, Nijmegen, The Netherlands Ana Helena Sánchez & Peter Schwabe Michael Düll Gesine Hinterwälder Christof Paar Ana Helena Sánchez Correspondence to Peter Schwabe. 
This is one of several papers published in Designs, Codes and Cryptography comprising the "Special Issue on Cryptography, Codes, Designs and Finite Fields: In Memory of Scott A. Vanstone". This work was supported by the Austrian Science Fund (FWF) under the grant number TRP251-N23, by the Netherlands Organisation for Scientific Research (NWO) through Veni 2013 project 13114, by the European Cooperation in Science and Technology (COST) Action IC1204 (Trustworthy Manufacturing and Utilization of Secure Devices—TRUDEVICE), and by the German Federal Ministry for Economic Affairs and Energy (Grant 01ME12025 SecMobil). Work was done while Michael Hutter was with Graz University of Technology, Austria. Permanent ID of this document: bd41e6b96370dea91c5858f1b809b581. Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. Düll, M., Haase, B., Hinterwälder, G. et al. High-speed Curve25519 on 8-bit, 16-bit, and 32-bit microcontrollers. Des. Codes Cryptogr. 77, 493–514 (2015). https://doi.org/10.1007/s10623-015-0087-1 Revised: 17 April 2015 Issue Date: December 2015 Curve25519 ECDH key-exchange Elliptic-curve cryptography Embedded devices AVR ATmega MSP430 ARM Cortex-M0
Genetic drift or allelic drift is the change in the frequency of a gene variant (allele) in a population due to random sampling.[1] The alleles in the offspring are a sample of those in the parents, and chance has a role in determining whether a given individual survives and reproduces. A population's allele frequency is the fraction of the copies of one gene that share a particular form.[2] Genetic drift may cause gene variants to disappear completely and thereby reduce genetic variation. When there are few copies of an allele, the effect of genetic drift is larger, and when there are many copies the effect is smaller. Vigorous debates occurred over the relative importance of natural selection versus neutral processes, including genetic drift. Ronald Fisher held the view that genetic drift plays at the most a minor role in evolution, and this remained the dominant view for several decades. In 1968 Motoo Kimura rekindled the debate with his neutral theory of molecular evolution, which claims that most instances where a genetic change spreads across a population (although not necessarily changes in phenotypes) are caused by genetic drift.[3] Analogy with cans of paint A painter P1 has 20 cans of paint, 10 red and 10 blue, in the back of his truck. Another painter (P2, representing the following generation) comes up with 20 empty cans that he wants filled. The first painter, obliging, fills an empty can from one of his own randomly selected cans, and returns his can to the back of his truck. He continues this way, filling empty cans from his (which we will pretend never run out of paint), until all 20 are filled. P2 takes his 20 cans, now full of paint, off to the back of his own truck. P3 awaits him with 20 empty cans. Continue ad infinitum. Eventually, a painter will walk away with 20 cans all of the same color. It is obvious that this can happen at any generation - if, for example, P1 had by chance always chosen one of his red cans when filling P2's empties, P2 would have walked away with 20 cans of red paint - and, therefore, would not have any blue paint to give to P3, who would have to be content with all red. Slightly less obvious is that the odds of this happening become smaller if the number of cans to be filled (that is, the size of the population) is larger. With 20 cans, the odds of P1 always picking a red can are (1/2)^20 - about one in a million. Slim, but with 200 cans the odds become 1 in 1.6×10^60 - less probable than picking out a specific atom from amongst all the atoms in the sun. More cans gives us a better statistical sample, meaning that the number of red and blue cans is likely to remain much closer to what it was in the previous generation. Genetic drift is weaker in large populations - the frequency of an allele (i.e., the fraction of cans that are red, in our example) will change more slowly between generations when the population size is large. Conversely, drift is fast in small populations - with only 4 cans, it might only take a few iterations before they are all the same color. Genetic drift thus tends to eliminate variation more quickly in small populations; large populations will tend to have greater genetic diversity.[4] Probability and allele frequency In probability theory, the law of large numbers predicts little change taking place over time when the population is large.
When the reproductive population is small, however, the effects of sampling error can alter the allele frequencies significantly. Genetic drift is therefore considered to be a consequential mechanism of evolutionary change primarily within small, isolated populations.[5] This effect can be illustrated with a simplified example. Consider a very large colony of bacteria isolated in a drop of solution. The bacteria are genetically identical except for a single gene with two alleles labeled A and B. Half the bacteria have allele A and the other half have allele B. Thus both A and B have allele frequency 1/2. A and B are neutral alleles—meaning they do not affect the bacteria's ability to survive and reproduce. This being the case, all bacteria in this colony are equally likely to survive and reproduce. The drop of solution then shrinks until it has only enough food to sustain four bacteria. All the others die without reproducing. Among the four who survive, there are sixteen possible combinations for the A and B alleles: (A-A-A-A), (B-A-A-A), (A-B-A-A), (B-B-A-A), (A-A-B-A), (B-A-B-A), (A-B-B-A), (B-B-B-A), (A-A-A-B), (B-A-A-B), (A-B-A-B), (B-B-A-B), (A-A-B-B), (B-A-B-B), (A-B-B-B), (B-B-B-B). If the combinations with the same number of A and B alleles respectively are counted, we get the following table. The probabilities are calculated with the slightly faulty premise that the peak population size was infinite.

A   B   Combinations   Probability
4   0   1              1/16
3   1   4              4/16
2   2   6              6/16
1   3   4              4/16
0   4   1              1/16

The probability of any one possible combination is $ \frac{1}{2} \cdot \frac{1}{2} \cdot \frac{1}{2} \cdot \frac{1}{2} = \frac{1}{16} $ where 1/2 (the probability of the A or B allele for each surviving bacterium) is multiplied four times (the total sample size, which in this example is the total number of surviving bacteria). As seen in the table, the total number of possible combinations with an equal (conserved) number of A and B alleles is six, and its probability is 6/16. The total number of possible alternative combinations is ten, and the probability of an unequal number of A and B alleles is 10/16. The total numbers of possible combinations can be represented as binomial coefficients and they can be derived from Pascal's triangle. The probability for any one of the possible combinations can be calculated with the formula $ {N\choose k} (1/2)^N\! $ where N is the number of bacteria and k is the number of A (or B) alleles in the combination. The function '()' signifies the binomial coefficient and can be expressed as "N choose k". Using the formula, the probability that the surviving four bacteria have, between them, two A alleles and two B alleles is[6] $ {4\choose 2} \left ( \frac{1}{2} \right ) ^4 = 6 \cdot \frac{1}{16} = \frac{6}{16} $ Genetic drift occurs when a population's allele frequencies change due to random events. In this example the population contracted to just four random survivors, a phenomenon known as population bottleneck. The original colony began with an equal distribution of A and B alleles but chances are that the remaining population of four members has an unequal distribution. The probability that this surviving population will undergo drift (10/16) is higher than the probability that it will remain the same (6/16). Mathematical models of genetic drift Mathematical models of genetic drift can be solved using either branching processes or a diffusion equation describing changes in allele frequency.[7] Wright-Fisher model Consider a gene with two alleles, A or B.
In diploid populations consisting of N individuals there are 2N copies of each gene. An individual can have two copies of the same allele or two different alleles. We can call the frequency of one allele p and the frequency of the other q. The Wright-Fisher model assumes that generations do not overlap. For example, annual plants have exactly one generation per year. Each copy of the gene found in the new generation is drawn independently at random from all copies of the gene in the old generation. The formula to calculate the probability of obtaining k copies of an allele that had frequency p in the last generation is then[8] $ \frac{(2N)!}{k!(2N-k)!} p^k q^{2N-k} $ where the symbol "!" signifies the factorial function. This expression can also be formulated using the binomial coefficient, $ {2N \choose k} p^k q^{2N-k} $ Moran model The Moran model assumes overlapping generations. At each time step, one individual is chosen to reproduce and one individual is chosen to die. So in each time step, the number of copies of a given allele can go up by one, go down by one, or can stay the same. This means that the transition matrix is tridiagonal, which means that mathematical solutions are easier for the Moran model than for the Wright-Fisher model. On the other hand, computer simulations are usually easier to perform using the Wright-Fisher model, because fewer time steps need to be calculated. In the Moran model, it takes N time steps to get through one generation, where N is the effective population size. In the Wright-Fisher model, it takes just one. In practice, the Moran model and Wright-Fisher model give qualitatively similar results, but genetic drift runs twice as fast in the Moran model. Other models of drift If the variance in the number of offspring is much greater than that given by the binomial distribution assumed by the Wright-Fisher model, then given the same overall speed of genetic drift (the variance effective population size), genetic drift is a less powerful force compared to selection.[9] Random effects other than sampling error Random changes in allele frequencies can also be caused by effects other than sampling error, for example random changes in selection pressure.[10] One important alternative source of stochasticity, perhaps more important than genetic drift, is genetic draft.[11] Genetic draft is the effect on a locus by selection on linked loci. The mathematical properties of genetic draft are different from those of genetic drift.[12] The direction of the random change in allele frequency is autocorrelated across generations.[1] Drift and fixation The Hardy–Weinberg principle states that within sufficiently large populations, the allele frequencies remain constant from one generation to the next unless the equilibrium is disturbed by migration, genetic mutation, or selection.[13] Populations do not gain new alleles from the random sampling of alleles passed to the next generation, but the sampling can cause an existing allele to disappear. Because random sampling can remove, but not replace, an allele, and because random declines or increases in allele frequency influence expected allele distributions for the next generation, genetic drift drives a population towards genetic uniformity over time. When an allele reaches a frequency of 1 (100%) it is said to be "fixed" in the population and when an allele reaches a frequency of 0 (0%) it is lost.
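As a concrete illustration of drift towards fixation or loss, the following small C program (an illustrative sketch, not taken from the literature cited here) simulates the Wright-Fisher sampling scheme described above: each of the 2N gene copies in a generation is drawn independently from the previous generation, so the count of allele A follows a binomial distribution, and the run continues until the allele is either fixed or lost.

```c
#include <stdio.h>
#include <stdlib.h>

/* Wright-Fisher drift simulation for a single locus with two alleles. */
int main(void)
{
    const int N = 50;     /* diploid population size, i.e. 2N = 100 gene copies */
    int k = N;            /* number of copies of allele A; start at frequency 1/2 */
    int gen = 0;
    srand(42);

    while (k > 0 && k < 2 * N) {
        double p = (double)k / (2 * N);
        int next = 0;
        for (int i = 0; i < 2 * N; i++)   /* 2N independent draws from the parents */
            if ((double)rand() / RAND_MAX < p)
                next++;
        k = next;
        gen++;
    }
    printf("allele A was %s after %d generations\n",
           k == 0 ? "lost" : "fixed", gen);
    return 0;
}
```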
Once an allele becomes fixed, genetic drift comes to a halt, and the allele frequency cannot change unless a new allele is introduced in the population via mutation or gene flow. Thus even while genetic drift is a random, directionless process, it acts to eliminate genetic variation over time.[14] Rate of allele frequency change under drift Assuming genetic drift is the only evolutionary force acting on an allele, after t generations in many replicated populations, starting with allele frequencies of p and q, the variance in allele frequency across those populations is $ V_t \approx pq\left(1-\exp\left\{-\frac{t}{2N_e} \right\}\right) $[15] Time to fixation or loss Assuming genetic drift is the only evolutionary force acting on an allele, at any given time the probability that an allele will eventually become fixed in the population is simply its frequency in the population at that time.[16] For example, if the frequency p for allele A is 75% and the frequency q for allele B is 25%, then given unlimited time the probability A will ultimately become fixed in the population is 75% and the probability that B will become fixed is 25%. The expected number of generations for fixation to occur is proportional to the population size, such that fixation is predicted to occur much more rapidly in smaller populations.[17] Normally the effective population size, which is smaller than the total population, is used to determine these probabilities. The effective population (Ne) takes into account factors such as the level of inbreeding, the stage of the lifecycle in which the population is the smallest, and the fact that some neutral genes are genetically linked to others that are under selection.[18] The effective population size may not be the same for every gene in the same population.[19] One forward-looking formula used for approximating the expected time before a neutral allele becomes fixed through genetic drift, according to the Wright-Fisher model, is $ \bar{T}_{fixed} = \frac{-4N_e(1-p) \ln (1-p)}{p} $ where T is the number of generations, Ne is the effective population size, and p is the initial frequency for the given allele. The result is the number of generations expected to pass before fixation occurs for a given allele in a population with given size (Ne) and allele frequency (p).[20] The expected time for the neutral allele to be lost through genetic drift can be calculated as[8] $ \bar{T}_{lost} = \frac{-4N_ep}{1-p} \ln p. $ When a mutation appears only once in a population large enough for the initial frequency to be negligible, the formulas can be simplified to[21] $ \bar{T}_{fixed} = 4N_e $ for the average number of generations expected before fixation of a neutral mutation, and $ \bar{T}_{lost} = 2 \left ( \frac{N_e}{N} \right ) \ln (2N) $ for the average number of generations expected before the loss of a neutral mutation.[22] Time to loss with both drift and mutation The formulae above apply to an allele that is already present in a population, and which is subject to neither mutation nor selection. If an allele is lost by mutation much more often than it is gained by mutation, then mutation, as well as drift, may influence the time to loss.
If the allele prone to mutational loss begins as fixed in the population, and is lost by mutation at rate m per replication, then the expected time in generations until its loss in a haploid population is given by $ \bar{T}_{lost} \approx \begin{cases} \frac{1}{m}, & \textrm{if} \; mN_e \ll 1\\ \frac{\ln{(mN_e)}+\gamma}{m}, & \textrm{if} \; mN_e \gg 1 \end{cases} $ where $ \gamma $ is equal to Euler's constant.[23] The first approximation represents the waiting time until the first mutant destined for loss, with loss then occurring relatively rapidly by genetic drift, taking time Ne ≪ 1/m. The second approximation represents the time needed for deterministic loss by mutation accumulation. In both cases, the time to fixation is dominated by mutation via the term 1/m, and is less affected by the effective population size. Genetic drift versus natural selection Although both processes affect evolution, genetic drift operates randomly while natural selection functions non-randomly. While natural selection has a direction, guiding evolution towards heritable adaptations to the current environment, genetic drift has no direction and is guided only by the mathematics of chance.[24] As a result, drift acts upon the genotypic frequencies within a population without regard to their phenotypic effects. In contrast, selection favors the spread of alleles whose phenotypic effects increase survival and/or reproduction of their carriers. Selection lowers the frequencies of alleles that cause unfavorable traits, and ignores those that are neutral.[25]
With a higher recombination rate, linkage decreases and with it this local effect on effective population size.[27][28] This effect is visible in molecular data as a correlation between local recombination rate and genetic diversity,[29] and a negative correlation between gene density and diversity at noncoding sites.[30] Stochasticity associated with linkage to other genes that are under selection is not the same as sampling error, and is sometimes known as genetic draft in order to distinguish it from genetic drift.[11] Population bottleneck Main article: Population bottleneck A population bottleneck is when a population contracts to a significantly smaller size over a short period of time due to some random environmental event.[31] In a true population bottleneck, the odds for survival of any member of the population are purely random, and are not improved by any particular inherent genetic advantage. The bottleneck can result in radical changes in allele frequencies, completely independent of selection. The impact of a population bottleneck can be sustained, even when the bottleneck is caused by a one-time event such as a natural catastrophe. After a bottleneck, inbreeding increases. This increases the damage done by recessive deleterious mutations, in a process known as inbreeding depression. The worst of these mutations are selected against, leading to the loss of other alleles that are genetically linked to them, in a process of background selection.[1] This leads to a further loss of genetic diversity. In addition, a sustained reduction in population size increases the likelihood of further allele fluctuations from drift in generations to come. A population's genetic variation can be greatly reduced by a bottleneck, and even beneficial adaptations may be permanently eliminated.[32] The loss of variation leaves the surviving population vulnerable to any new selection pressures such as disease, climate change or a shift in the available food source, because adapting in response to environmental changes requires sufficient genetic variation in the population for natural selection to take place.[33][34] There have been many known cases of population bottleneck in the recent past. Prior to the arrival of Europeans, North American prairies were habitat for millions of greater prairie chickens. In Illinois alone, their numbers plummeted from about 100 million birds in 1900 to about 50 birds in the 1990s. The declines in population resulted from hunting and habitat destruction, but the random consequence has been a loss of most of the species' genetic diversity. DNA analysis comparing birds from the mid century to birds in the 1990s documents a steep decline in the genetic variation in just the latter few decades. Currently the greater prairie chicken is experiencing low reproductive success.[35] Over-hunting also caused a severe population bottleneck in the northern elephant seal in the 19th century. Their resulting decline in genetic variation can be deduced by comparing it to that of the southern elephant seal, which was not so aggressively hunted.[36] Founder effect Main article: Founder effect The founder effect is a special case of a population bottleneck, occurring when a small group in a population splinters off from the original population and forms a new one.
The random sample of alleles in the just formed new colony is expected to grossly misrepresent the original population in at least some respects.[37] It is even possible that the number of alleles for some genes in the original population is larger than the number of gene copies in the founders, making complete representation impossible. When a newly formed colony is small, its founders can strongly affect the population's genetic make-up far into the future. A well documented example is found in the Amish migration to Pennsylvania in 1744. Two members of the new colony shared the recessive allele for Ellis–van Creveld syndrome. Members of the colony and their descendants tend to be religious isolates and remain relatively insular. As a result of many generations of inbreeding, Ellis-van Creveld syndrome is now much more prevalent among the Amish than in the general population.[25][38] The difference in gene frequencies between the original population and colony may also trigger the two groups to diverge significantly over the course of many generations. As the difference, or genetic distance, increases, the two separated populations may become distinct, both genetically and phenetically, although not only genetic drift but also natural selection, gene flow, and mutation contribute to this divergence. This potential for relatively rapid changes in the colony's gene frequency led most scientists to consider the founder effect (and by extension, genetic drift) a significant driving force in the evolution of new species. Sewall Wright was the first to attach this significance to random drift and small, newly isolated populations with his shifting balance theory of speciation.[39] Following after Wright, Ernst Mayr created many persuasive models to show that the decline in genetic variation and small population size following the founder effect were critically important for new species to develop.[40] However, there is much less support for this view today since the hypothesis has been tested repeatedly through experimental research and the results have been equivocal at best.[41] History of the concept Edit The concept of genetic drift was first introduced by one of the founders in the field of population genetics, Sewall Wright. His first use of the term "drift" was in 1929,[42] though at the time he was using it in the sense of a directed process of change, or natural selection. Random drift by means of sampling error came to be known as the "Sewall-Wright effect", though he was never entirely comfortable to see his name given to it. Wright referred to all changes in allele frequency as either "steady drift" (e.g. selection) or "random drift" (e.g. sampling error).[43] "Drift" came to be adopted as a technical term in the stochastic sense exclusively.[44] Today it is usually defined still more narrowly, in terms of sampling error.[45] Wright wrote that the "restriction of "random drift" or even "drift" to only one component, the effects of accidents of sampling, tends to lead to confusion."[43] Sewall Wright considered the process of random genetic drift by means of sampling error equivalent to that by means of inbreeding, but later work has shown them to be distinct.[46] In the early days of the modern evolutionary synthesis, scientists were just beginning to blend the new science of population genetics with Charles Darwin's theory of natural selection. Working within this new framework, Wright focused on the effects of inbreeding on small relatively isolated populations. 
He introduced the concept of an adaptive landscape in which phenomena such as cross breeding and genetic drift in small populations could push them away from adaptive peaks, which in turn allow natural selection to push them towards new adaptive peaks.[47] Wright thought smaller populations were more suited for natural selection because "inbreeding was sufficiently intense to create new interaction systems through random drift but not intense enough to cause random nonadaptive fixation of genes."[44] Wright's views on the role of genetic drift in the evolutionary scheme were controversial almost from the very beginning. One of the most vociferous and influential critics was colleague Ronald Fisher. Fisher conceded genetic drift played some role in evolution, but an insignificant one. Fisher has been accused of misunderstanding Wright's views because in his criticisms Fisher seemed to argue Wright had rejected selection almost entirely. To Fisher, viewing the process of evolution as a long, steady, adaptive progression was the only way to explain the ever increasing complexity from simpler forms. But the debates have continued between the "gradualists" and those who lean more toward the Wright model of evolution where selection and drift together play an important role.[48] In 1968,[49] population geneticist Motoo Kimura rekindled the debate with his neutral theory of molecular evolution, which claims that most of the genetic changes are caused by genetic drift acting on neutral mutations.[3] The role of genetic drift by means of sampling error in evolution has been criticized by John H Gillespie[50] and Will Provine, who argue that selection on linked sites is a more important stochastic force. Allopatric speciation Antigenic drift Gene pool Small population size ↑ 1.0 1.1 1.2 Masel J (2011). Genetic drift. Current Biology 21 (20): R837–R838. ↑ Futuyma, Douglas (1998). Evolutionary Biology, Sinauer Associates. ↑ 3.0 3.1 Futuyma, Douglas (1998). Evolutionary Biology, Sinauer Associates. ↑ Evolution 101:Sampling Error and Evolution. University of California Berkeley. URL accessed on 2009-11-01. ↑ Zimmer, Carl (2002). Evolution : The Triumph of an Idea, 364, New York, NY: Perennial. ↑ Walker J. Introduction to Probability and Statistics. The RetroPsychoKinesis Project. Fourmilab. URL accessed on 2009-11-17. ↑ Wahl L.M. (2011). Fixation when N and s Vary: Classic Approaches Give Elegant New Results. Genetics 188 (4): 783–785. ↑ 8.0 8.1 Daniel Hartl, Andrew Clark (2007). Principles of Population Genetics, 4th edition, Sinauer Associates. ↑ Der R, Epstein CL, Plotkin JB (2011). Generalized population models and the nature of genetic drift. Theoretical Population Biology 80 (2): 80–99. ↑ Li, Wen-Hsiung; Dan Graur (1991). Fundamentals of Molecular Evolution, Sinauer Associates. ↑ 11.0 11.1 11.2 Gillespie, John H. (2001). Is the population size of a species relevant to its evolution?. Evolution 55 (11): 2161–2169. ↑ R.A. Neher and B.I. Shraiman (2011). Genetic Draft and Quasi-Neutrality in Large Facultatively Sexual Populations. Genetics 188 (4): 975–996. ↑ Warren Ewens (2004). Mathematical Population Genetics I. Theoretical Introduction, Springer-Verlag. ↑ Nicholas H. Barton, Derek E. G. Briggs, Jonathan A. Eisen, David B. Goldstein, Nipam H. Patel (2007). Evolution, Cold Spring Harbor Laboratory Press. ↑ Otto S, Whitlock M (1 June 1997). The Probability of Fixation in Populations of Changing Size. Genetics 146 (2): 723–33. ↑ Charlesworth B (March 2009). 
Fundamental concepts in genetics: Effective population size and patterns of molecular evolution and variation. Nat. Rev. Genet. 10 (3): 195–205. ↑ Asher D. Cutter and Jae Young Choi (2010). Natural selection shapes nucleotide polymorphism across the genome of the nematode Caenorhabditis briggsae. Genome Research 20 (8): 1103–1111. ↑ Hedrick, Philip W. (2004). Genetics of Populations, 737, Jones and Bartlett Publishers. ↑ Wen-Hsiung Li, Dan Graur (1991). Fundamentals of Molecular Evolution, Sinauer Associates. ↑ Kimura, Motoo (2001). Theoretical Aspects of Population Genetics, 232, Princeton University Press. ↑ Masel J, King OD, Maughan H (2007). The loss of adaptive plasticity during long periods of environmental stasis. American Naturalist 169 (1): 38–46. ↑ Natural Selection: How Evolution Works (An interview with Douglas Futuyma, see answer to question Is natural selection the only mechanism of evolution?). ActionBioscience.org. URL accessed on 2009-11-24. ↑ 25.0 25.1 25.2 Cavalli-Sforza, L. L. (1996). The history and geography of human genes, 413, Princeton, N.J.: Princeton University Press. ↑ Small KS, Brudno M, Hill MM, Sidow A (March 2007). Extreme genomic variation in a natural population. Proc. Natl. Acad. Sci. U.S.A. 104 (13): 5698–703. ↑ Golding B (1994). Non-neutral evolution: theories and molecular data, 46, Springer. ↑ Charlesworth B, Morgan MT, Charlesworth D (August 1993). The Effect of Deleterious Mutations on Neutral Molecular Variation. Genetics 134 (4): 1289–303. ↑ Presgraves DC (September 2005). Recombination enhances protein adaptation in Drosophila melanogaster. Curr. Biol. 15 (18): 1651–6. ↑ Nordborg M, Hu TT, Ishino Y, et al. (July 2005). The Pattern of Polymorphism in Arabidopsis thaliana. PLoS Biol. 3 (7): e196. ↑ Population Bottleneck | Macmillan Genetics ↑ Futuyma, Douglas (1998). Evolutionary Biology, 303–304, Sinauer Associates. ↑ O'Corry-Crowe G (2008). Climate change and the molecular ecology of arctic marine mammals. Ecological Applications 18 (2 Suppl): S56–S76. ↑ Cornuet JM, Luikart G (1996). Description and Power Analysis of Two Tests for Detecting Recent Population Bottlenecks from Allele Frequency Data. Genetics 144 (4): 2001–14. ↑ Hillis, David M. (2006). "Chs. 1, 21–33, 52–57" Life: The Science of Biology, 1251, San Francisco: W. H. Freeman. ↑ Evolution 101: Bottlenecks and Founder Effects. University of California, Berkeley. URL accessed on 2009-04-07. ↑ Neill, Campbell (1996). Biology; Fourth edition, The Benjamin/Cummings Publishing Company. ↑ Genetic Drift and the Founder Effect. Evolution. Public Broadcast System. URL accessed on 2009-04-07. ↑ Wade, Michael S.; Wolf, Jason; Brodie, Edmund D. (2000). Epistasis and the evolutionary process, Oxford [Oxfordshire]: Oxford University Press. ↑ Mayr, Ernst, Jody Hey, Walter M. Fitch, Francisco José Ayala (2005). Systematics and the Origin of Species: on Ernst Mayr's 100th anniversary, Illustrated, National Academies Press. ↑ Howard, Daniel J. (1998). Endless Forms, Illustrated, United States: Oxford University Press. ↑ Wright S (1929). The evolution of dominance. The American Naturalist 63 (689): 556–61. ↑ 43.0 43.1 Wright S (1955). Classification of the factors of evolution. Cold Spring Harb Symp Quant Biol 20: 16–24. ↑ 44.0 44.1 Stevenson, Joan C. (1991). Dictionary of Concepts in Physical Anthropology, Westport, Conn: Greenwood Press. ↑ Freeman and Herron. Evolutionary Analysis. 2007. Pearson Education, NJ. p.801 ↑ James F. Crow (2010). Wright and Fisher on Inbreeding and Random Drift. 
Genetics 184 (3): 609–611. ↑ Larson, Edward J. (2004). Evolution: The Remarkable History of a Scientific Theory, Modern Library. ↑ Avers, Charlotte (1989). Process and Pattern in Evolution, Oxford University Press. ↑ Kimura M (1968). Evolutionary rate at the molecular level. Nature 217 (5129): 624–26. ↑ Gillespie JH (2000). Genetic drift in an infinite population. The pseudohitchhiking model. Genetics 155 (2): 909–919.
Infinite Sudoku and the Sudoku game
Posted on April 16, 2018 by Joel David Hamkins

Consider what I call the Sudoku game, recently introduced in the MathOverflow question Who wins two-player Sudoku? posted by Christopher King (user PyRulez). Two players take turns placing numbers on a Sudoku board, obeying the rule that they must never explicitly violate the Sudoku condition: the numbers on any row, column or sub-board square must never repeat. The first player who cannot continue legal play loses. Who wins the game? What is the winning strategy?

The game is not about building a global Sudoku solution, since a move can be legal in this game even when it is not part of any global Sudoku solution, provided only that it doesn't yet explicitly violate the Sudoku condition. Rather, the Sudoku game is about trying to trap your opponent in a maximal such position, a position which does not yet explicitly violate the Sudoku condition but which cannot be further extended.

In my answer to the question on MathOverflow, I followed an idea suggested to me by my daughter Hypatia, namely that on even-sized boards $n^2\times n^2$, where $n$ is even, the second player can win with a mirroring strategy: simply copy the opponent's moves in reflected mirror image through the center of the board. In this way, the second player ensures that the position on the board is always symmetric after her play, and so if the previous move was safe, then her move also will be safe by symmetry. This is therefore a winning strategy for the second player, since any violation of the Sudoku condition will arise on the opponent's play. This argument works on even-sized boards precisely because the reflection of every row, column and sub-board square is a totally different row, column and sub-board square, and so any new violation of the Sudoku conditions would reflect to a violation that was already there. The mirror strategy definitely does not work on the odd-sized boards, including the main $9\times 9$ case, since if the opponent plays on the central row, copying directly would immediately introduce a Sudoku violation.

After posting that answer, Orson Peters (user orlp) pointed out that one can modify it to form a winning strategy for the first player on odd-sized boards, including the main $9\times 9$ case. In this case, let the first player begin by playing $5$ in the center square, and then afterwards copy the opponent's moves, but with the ten's complement at the reflected location. So if the opponent plays $x$, then the first player plays $10-x$ at the reflected location. In this way, the first player can ensure that the board is ten's complement symmetric after her moves. The point is that again this is sufficient to know that she will never introduce a violation, since if her $10-x$ appears twice in some row, column or sub-board square, then $x$ must have already appeared twice in the reflected row, column or sub-board square before that move. This idea is fully general for odd-sized Sudoku boards $n^2\times n^2$, where $n$ is odd. If $n^2=2k-1$, then the first player starts with $k$ in the very center and afterward plays the $2k$-complement of her opponent's move at the reflected location.

On even-sized Sudoku boards, the second player wins the Sudoku game by the mirror copying strategy. On odd-sized Sudoku boards, the first player wins the Sudoku game by the complement-mirror copying strategy.
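To make the two copying strategies concrete, here is a minimal sketch in Python (my own illustration, not from the original post): cells are indexed $0,\dots,N-1$ in each coordinate on a board with $N=n^2$ labels, `mirror_reply` is the second player's reply on even boards, and `complement_reply` is the first player's reply on odd boards after the opening move in the center. The board encoding and the helper `violates` are assumptions of this sketch.

```python
import random

def mirror_reply(move, N):
    """Second player's reply on even boards: reflect the move through the center."""
    r, c, x = move
    return (N - 1 - r, N - 1 - c, x)

def complement_reply(move, N):
    """First player's reply on odd boards: reflect and take the (N+1)-complement."""
    r, c, x = move
    return (N - 1 - r, N - 1 - c, (N + 1) - x)

def violates(board, move, n):
    """True if placing `move` explicitly violates the Sudoku condition."""
    r, c, x = move
    return any(y == x and (rr == r or cc == c or
                           (rr // n == r // n and cc // n == c // n))
               for (rr, cc), y in board.items())

# Random playout on the 9x9 board: the first player opens with 5 in the center
# and thereafter copies with the ten's complement; her replies are always legal.
n, N = 3, 9
board = {(4, 4): 5}
for _ in range(20):
    cells = [(r, c) for r in range(N) for c in range(N) if (r, c) not in board]
    random.shuffle(cells)
    opp = next(((r, c, x) for r, c in cells for x in range(1, N + 1)
                if not violates(board, (r, c, x), n)), None)
    if opp is None:
        break
    board[opp[:2]] = opp[2]
    reply = complement_reply(opp, N)
    assert not violates(board, reply, n)   # safe by the symmetry argument
    board[reply[:2]] = reply[2]
```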
Note that on the even boards, the second player could also play complement mirror copying just as successfully. What I really want to tell you about, however, is the infinite Sudoku game (following a suggestion of Sam Hopkins). Suppose that we try to play the Sudoku game on a board whose subboard squares are $\mathbb{Z}\times\mathbb{Z}$, so that the full board is a $\mathbb{Z}\times\mathbb{Z}$ array of those squares, making $\mathbb{Z}^2\times\mathbb{Z}^2$ altogether. (Or perhaps you might prefer the board $\mathbb{N}^2\times\mathbb{N}^2$?) One thing to notice is that on an infinite board, it is no longer possible to get trapped at a finite stage of play, since every finite position can be extended simply by playing a totally new label from the set of labels; such a move would never lead to a new violation of the explicit Sudoku condition. For this reason, I should like to introduce the Sudoku Solver-Spoiler game variation as follows. There are two players: the Sudoku Solver and the Sudoku Spoiler. The Solver is trying to build a global Sudoku solution on the board, while the Spoiler is trying to prevent this. Both players must obey the Sudoku condition that labels are never to be explicitly repeated in any row, column or sub-board square. On an infinite board, the game proceeds transfinitely, until the board is filled or there are no legal moves. The Solver wins a play of the game, if she successfully builds a global Sudoku solution, which means not only that every location has a label and there are no repetitions in any row, column or sub-board square, but also that every label in fact appears in every row, column and sub-board square. That is, to count as a solution, the labels on any row, column and sub-board square must be a bijection with the set of labels. (On infinite boards, this is a stronger requirement than merely insisting on no repetitions.) The Solver-Spoiler game makes sense in complete generality on any set $S$, whether finite or infinite. The sub-boards are $S^2=S\times S$, and one has an $S\times S$ array of them, so $S^2\times S^2$ for the whole board. Every row and column has the same size as the sub-board square $S^2$, and the set of labels should also have this size. Upon reflection, one realizes that what matters about $S$ is just its cardinality, and we really have for every cardinal $\kappa$ the $\kappa$-Sudoku Solver-Spoiler game, whose board is $\kappa^2\times\kappa^2$, a $\kappa\times\kappa$ array of $\kappa\times\kappa$ sub-boards. In particular, the game $\mathbb{Z}^2\times\mathbb{Z}^2$ is actually isomorphic to the game $\mathbb{N}^2\times\mathbb{N}^2$, despite what might feel initially like a very different board geometry. What I claim is that the Solver has a winning strategy in the all the infinite Sudoku Solver-Spoiler games, in a very general and robust manner. Theorem. For every infinite cardinal $\kappa$, the Solver has a winning strategy to win the $\kappa$-Sudoku Solver-Spoiler game. The strategy will win in $\kappa$ many moves, producing a full Sudoku solution. The Solver can win whether she goes first or second, starting from any legal position of size less than $\kappa$. The Solver can win even when the Spoiler is allowed to play finitely many labels at once on each turn, or fewer than $\kappa$ many moves (if $\kappa$ is regular), even if the Solver is only allowed one move each turn. 
In the countably infinite Sudoku game, the Solver can win even if the Spoiler is allowed to make infinitely many moves at once, provided only that the resulting position can in principle be extended to a full solution.

Proof. Consider first the countably infinite Sudoku game, and assume the initial position is finite and that the Spoiler will make finitely many moves on each turn. Consider what it means for the Solver to win at the limit. It means, first of all, that there are no explicit repetitions in any row, column or sub-board. This requirement will be ensured since it is part of the rules for legal play not to violate it. Next, the Solver wants to ensure that every square has a label on it and that every label appears at least once in every row, every column and every sub-board. If we think of these as individual specific requirements, we have countably many requirements in all, and I claim that we can arrange that the Solver will simply satisfy the $n^{th}$ requirement on her $n^{th}$ play. Given any finite position, she can always find something to place in any given square, using a totally new label if need be. Given any finite position, any row and any particular label $k$, she can always find a place on that row to place the label, which has no conflict with any column or sub-board, since there are infinitely many to choose from and only finitely many conflicts. Similarly with columns and sub-boards. So each of the requirements can always be fulfilled one-at-a-time, and so in $\omega$ many moves she can produce a full solution.

The argument works equally well no matter who goes first or if the Spoiler makes arbitrary finite play, or indeed even infinite play, provided that the play is part of some global solution (perhaps a different one each time), since on each move the Solver can simply meet the requirement by using that solution at that stage.

An essentially similar argument works when $\kappa$ is uncountable, although now the play will proceed for $\kappa$ many steps. Assuming $\kappa^2=\kappa$, a consequence of the axiom of choice, there are $\kappa$ many requirements to meet, and the Solver can meet requirement $\alpha$ on the $\alpha^{th}$ move. If $\kappa$ is regular, we can again allow the Spoiler to make arbitrary moves of size less than $\kappa$, so that at any stage of play before $\kappa$ the position will still be of size less than $\kappa$. (If $\kappa$ is singular, one can allow the Spoiler to make finitely many moves at once or indeed even some uniform bounded size $\delta<\kappa$ many moves at once.) $\Box$

I find it interesting to draw out the following aspect of the argument:

Observation. Every finite labeling of an infinite Sudoku board that does not yet explicitly violate the Sudoku condition can be extended to a global solution. Similarly, any labeling of size less than $\kappa$ that does not yet explicitly violate the Sudoku condition can be extended to a global solution of the $\kappa$-Sudoku board for any infinite cardinal $\kappa$.

What about asymmetric boards? It has come to my attention that people sometimes look at asymmetric Sudoku boards, whose sub-boards are not square, such as in the six-by-six Sudoku case. In general, one could take Sudoku boards to consist of a $\lambda\times\kappa$ array of sub-boards of size $\kappa\times\lambda$, where $\kappa$ and $\lambda$ are cardinals, not necessarily the same size and not necessarily both infinite or both finite. How does this affect the arguments I've given?
In the finite $(n\times m)\times (m\times n)$ case, if one of the numbers is even, then it seems to me that the reflection through the origin strategy works for the second player just as before. And if both are odd, then the first player can again play in the center square and use the mirror-complement strategy to trap the opponent. So that analysis will work fine.

In the case $(\kappa\times\lambda)\times(\lambda\times\kappa)$ where $\lambda\leq\kappa$ and $\kappa=\lambda\kappa$ is infinite, then the proof of the theorem seems to break, since if $\lambda<\kappa$, then with only $\lambda$ many moves, say putting a common symbol in each of the $\lambda$ many rectangles across a row, we can rule out that symbol in a fixed row. So this is a configuration of size less than $\kappa$ that cannot be extended to a full solution. For this reason, it seems likely to me that the Spoiler can win the Sudoku Solver-Spoiler game in the infinite asymmetric case.

Finally, let's consider the Sudoku Solver-Spoiler game in the purely finite case, which actually is a very natural game, perhaps more natural than what I called the Sudoku game above. It seems to me that the Spoiler should be able to win the Solver-Spoiler game on any nontrivial finite board. But I don't yet have an argument proving this. I asked a question on MathOverflow: The Sudoku game: Solver-Spoiler variation.
Application of response surface methodology for prediction and modeling of surface roughness in ball end milling of OFHC copper Asiful H. Seikh ORCID: orcid.org/0000-0003-1423-46281, Biplab Baran Mandal2, Amit Sarkar3, Muneer Baig4, Nabeel Alharthi1,5 & Bandar Alzahrani6 This study was conducted to investigate the synergistic effects of cutting parameters on surface roughness in ball end milling of oxygen-free high conductivity (OFHC) copper and to determine a statistical model that can suitably correlate the experimental results. Firstly, an experimental plan based on a full factorial rotatable central composite design with variable parameters, the cutting feed rate or feed per tooth, axial depth of cut, radial depth of cut, and the cutting speed, was developed. The range for each variable was varied through five different levels. Secondly, a mathematical model was formulated based on the response surface methodology (RSM) for roughness components (Ra and Rz micron). The predicted values from the model were found to be close to the actual experimental values. Finally, for checking the adequacy of the models, analysis of variance (ANOVA) was used to examine the dependence of the process parameters and their interactions. The developed model would assist in selecting the cutting variables for optimization of ball end milling process for a particular material. Based on the results from this study, it is concluded that the step over or radial depth of cut have a higher contribution (45.81%) and thus has a significant influence on the surface roughness of the milled OFHC copper. Oxygen-free high conductivity (OFHC) Cu is a pure form of Cu with 99.99% Cu and is widely used in electrical applications such as cryogenic shunts, X-ray storage ring, and various other industries for different applications (Mahto and Kumar, 2008; Yang and Chen, 2001; Zhang, Chen, and Kirby, 2007). Presently, the demand for good quality of finished OFHC Cu material (like a mirror finish surface) is increasing at a brisk pace for its use in various sectors, like manufacturing, electrical, electronics, nuclear, and medical science (Mahto and Kumar, 2008; Yang and Chen, 2001; Zhang et al. 2007). To achieve a good quality of surface finished products, the selection of proper process parameters are important and essential (Yang and Chen, 2001). Among the several metal cutting operations, end milling has been a vital, common, and widely used process for machining parts in numerous applications including aerospace, automotive, and several manufacturing industries (Mahto and Kumar, 2008; Zhang et al. 2007). It is well known that the surface roughness is an important parameter in the machining process (Makadia and Nanavati, 2013). Usually, the product quality is measured by its surface roughness. Minimizing the surface roughness results in a product with good surface finish of the final machined part. Thus, researchers have directed their attention toward developing models and quantifying the relationship between roughness and its parameters. The determination of this relationship is for the advancement in manufacturing machines, materials technology, and the availability of modeling techniques. The different methods include that confined in this approach response surface method (RSM), factorial designs, and Taguchi methods (Lin, 1994). 
These have recently become the most popular methods among researchers, since they reduce the machinist's effort and minimize machining time and cost in a way that was not possible with the old single-factor-at-a-time or "trial-and-error" experimental approach (Lin, 1994). Among the various approaches used to predict surface roughness, a brief review of roughness modeling using RSM is given below.

Alauddin et al. (Alauddin, El Baradie, and Hashmi, 1996) presented their work on optimizing the surface finish of Inconel 718 in end milling. They used uncoated carbide inserts under dry operating conditions. RSM was used to develop first- and second-order models, and based on the results it was concluded that the surface roughness increases with increasing feed, whereas increasing the cutting speed decreases the surface roughness. Suresh et al. (Suresh, Rao, and Deshmukh, 2002) proposed a model dependent on the machining parameters for measuring the surface roughness of a material and later optimized the parameters using a genetic algorithm. Routara et al. (Routara, Bandyopadhyay, and Sahoo, 2009) proposed a roughness model for end milling of three different materials: Al 6061-T4, AISI 1040 steel, and medium-leaded brass UNS C34000. The study included five roughness parameters, and for each behavior a second-order response surface equation was developed. Benardos and Vosniakos (2002) presented a review of surface roughness prediction in the machining process. The different approaches reviewed were based on machining theory, experimental design and investigation, and artificial intelligence. Colak et al. (Colak, Kurbanoglu, and Kayacan, 2007) optimized roughness parameters using evolutionary programming for end milled surfaces. A linear equation was proposed for the estimation of the surface roughness in terms of parameters such as cutting speed, feed, and depth of cut. Lakshmi et al. (Lakshmi and Subbaiah, 2012) used RSM for modeling and optimization of the end milling process parameters, taking the average surface roughness of EN24 grade steel machined on a CNC vertical machining center as the response. In addition, a second-order model was developed based on the feed, depth of cut, and the speed of cutting. It was shown that the predicted values from the model were in close agreement with the experimental values for Ra. Jeyakumar et al. (Jeyakumar and Marimuthu, 2013) used RSM to predict the tool wear, cutting force, and surface roughness of an Al6061/SiC composite in end milling. The developed model was further used to investigate the synergistic effect of machining parameters on the tool wear. Ozcelik and Bayramoglu (2006) developed a statistical model to predict the surface roughness in high-speed flat end milling of AISI 1040 steel. The experiments were performed under wet cutting conditions using step over, spindle speed, feed rate, and depth of cut. It was found that the adjusted R² increases from 87.9 to 94% by adding total machining time as a new variable. Mansour and Abdalla (2002) studied the roughness (Ra) in end milling of EN 32 steel using RSM. Wang and Chang (2004) studied the effect of micro-end-milling cutting conditions on the roughness of a brass surface using RSM. Reddy and Rao (2005) developed a mathematical model using RSM to calculate surface roughness during end milling of medium carbon steel.
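As a concrete aside (added here for illustration and not drawn from any of the studies cited above), the common thread in these works is an ordinary least-squares fit of a second-order polynomial response surface. A minimal sketch with synthetic placeholder data:

```python
import numpy as np

# Fit y = a0 + a1*x1 + a2*x2 + a11*x1^2 + a22*x2^2 + a12*x1*x2 by least squares.
# The 31 "runs" and the response below are synthetic placeholders, not Table 3 data.
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(31, 2))                 # two coded factors x1, x2
y = (0.7 + 0.10 * X[:, 0] - 0.05 * X[:, 1]
     - 0.06 * X[:, 0] ** 2 + 0.02 * X[:, 0] * X[:, 1]
     + rng.normal(0, 0.01, 31))                      # noisy synthetic response

D = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                     X[:, 0] ** 2, X[:, 1] ** 2, X[:, 0] * X[:, 1]])
coef, *_ = np.linalg.lstsq(D, y, rcond=None)         # estimates of a0 ... a12
print(np.round(coef, 3))
```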
Based on the literature presented above, it is evident that there are mainly four machining parameters that affect the surface roughness of end milled parts. Thus, in the present study, two roughness parameters, viz. roughness average (Ra) and mean roughness depth (Rz), were considered as responses for generating a statistical predictive model in terms of the machining parameters.

The machine used for the milling tests is a 'MIKRON VCP710' CNC machining center having the control system Heidenhain TNC430 MHS with a vertical milling head. The maximum spindle speed and work feed rate of the machine are 18,000 rpm and 15 m/min, respectively. Visicam 15 is used for drawing and tool path generation. The experimental setup used in this study is shown in Fig. 1. The cutting tool used in the present work was a solid carbide ball nose end mill cutter. The tool has a cutter diameter of 8 mm; overall length, 63 mm; fluted length, 45 mm; helix angle, 30°; hardness, less than 48 HRC; number of flutes, 2. It was produced by "Sandvik" Coromill Plura (CoroMill Plura solid carbide end mills tool handbook "Sandvik Coromont," n.d.). Surface roughness was measured using a Mitutoyo Surftest SJ-301. The Surftest SJ-301 is set to a cut-off length (λc) of 0.8 mm, a maximum traverse speed of 0.5 mm/s, and an evaluation length of 4 mm. The stylus material is diamond with a tip radius of 5 μm. The roughness tester is shown in Fig. 1. Surface roughness was measured in the transverse direction on the workpiece.

Experimental setup for measurement of surface roughness using Mitutoyo Surftest SJ-301

Workpiece material

The present study was performed on OFHC Cu. The chemical composition of the OFHC copper used is shown in Table 1. The dimension of the specimen was 145 mm × 90 mm × 38 mm.

Table 1 Chemical composition of OFHC Cu (wt%)

The RSM technique is based on a statistical and mathematical (least-squares fitting) approach for modeling and analysis of problems where the response is influenced by several parametric variables. RSM can be considered a systematic approach to find the relationship between various machining criteria and process parameters (Montgomery, 2005). The design of the experiment was performed in the Minitab 17 software package, which is used to analyze the data obtained from the following steps:

1. Choose the number of process parameters taken for the experiment.
2. Select the appropriate model to be used.
3. Perform ANOVA to check the adequacy of the model.
4. Use a proper elimination process: stepwise, backward or forward elimination.
5. Inspect the diagnostic plots to validate the model statistically.

Steps (2) and (3) help in identifying whether the model is appropriate, followed by generating model graphs (contour and 3D graphs) for interpretation. In RSM, the initial step is to find an approximation for the functional relationship between the response $y$ and the controllable variables $\{x_1, x_2, \ldots, x_n\}$. This relationship in terms of the input parameters is written as (Montgomery, 2005):

$$ y=f\left(x_1,x_2,\ldots,x_n\right)+\varepsilon $$

When the response function is non-linear, or there exists a curvature in the system, a higher-degree model must be used, such as the second-order model

$$ y=a_0+\sum_{j=1}^{k} a_j X_j+\sum_{j=1}^{k} a_{jj} X_j^{2}+\sum_{i=1}^{k-1}\sum_{j=i+1}^{k} a_{ij} X_i X_j+\varepsilon $$
Here $a_0$ is the free (constant) term of the regression equation; the coefficients $a_1, a_2, \ldots, a_k$ and $a_{11}, a_{22}, \ldots, a_{kk}$ are the linear and the quadratic terms, respectively; and $a_{ij}$, for $i = 1, 2, \ldots, k-1$ and $j = 1, 2, \ldots, k$ with $i < j$, are the interaction terms. The $X$'s represent the input parameters ($f_z$, $a_p$, $a_e$, and $V_c$). The output surface roughness components (Ra and Rz) are also called the response factors, and $\varepsilon$ represents the noise or error observed in the response $y$ (Montgomery, 2005). The RSM model is fitted using the least-squares technique. However, the calculated coefficients should be tested for statistical significance.

The data required for the computation were collected from a design of experiments based on a rotatable central composite design (RCCD) (Box and Hunter, 1957), varying each numeric factor over five levels coded as −2, −1, 0, +1, and +2. The coded values were calculated using the following relationship in Eq. (3):

$$ X_i=\frac{2\left(2X-\left(X_{\mathrm{max}}+X_{\mathrm{min}}\right)\right)}{X_{\mathrm{max}}-X_{\mathrm{min}}} $$

where $X_i$ is the required coded value of a variable $X$, $X$ is any value of the variable between $X_{\mathrm{min}}$ and $X_{\mathrm{max}}$, and $X_{\mathrm{min}}$ and $X_{\mathrm{max}}$ are the lower and upper limits of the variable. The intermediate values are coded as −1, 0, and +1. The selected parameters and their levels are shown in Table 2. The experimental design consists of 31 runs; the design matrix and the experimental results are outlined in Table 3.

Table 2 Parameters and their levels in ball nose milling

Table 3 Experimental design-CCD matrix in coded form and measured value of responses

Table 3 presents the experimental results for the responses (Ra and Rz), together with the predicted values obtained from the regression equations and the percentage error. The surface roughness (Ra) and mean roughness depth (Rz) values are obtained in the ranges 0.17–0.70 μm and 1.29–3.68 μm, respectively. Figures 2, 3, 4, and 5 present the normal probability plots of residuals and the plots of residuals vs. predicted response for both roughness components (Ra, Rz).

Normal probability plot of residuals for Ra

Plot of residuals vs. predicted response for Ra

Normal probability plot of residuals for Rz

Plot of residuals vs. predicted response for Rz

The ANOVA results for the surface roughness components Ra and Rz were obtained from the Minitab 17 statistical software. They were used to analyze the influence of cutting speed, feed per tooth, axial depth of cut, and radial depth of cut on the experimental results. Tables 4 and 5 show the ANOVA results for the roughness components Ra and Rz, respectively. The results from ANOVA and the F ratio were used to check the adequacy of the models as well as to show the significance of the individual model coefficients.

Table 4 Analysis of variance for roughness average Ra (μm) (reduced quadratic model)

Table 5 Analysis of variance for mean roughness depth Rz (μm) (reduced quadratic model)

Table 4 shows the ANOVA for the roughness average (Ra). From this table, it can be seen that all the linear and square terms are significant, and the two-way interaction effects of cutting feed rate and radial depth of cut (fz × ae), cutting feed rate and cutting speed (fz × Vc), axial depth of cut and radial depth of cut (ap × ae), and step over and cutting speed (ae × Vc) are regarded as significant terms. The remaining two-way interaction effects, fz × ap and ap × Vc, are insignificant, as their p values are greater than 0.05, and thus are not included in the final quadratic model. Table 4 shows that the coefficient of correlation R² is 99.86%, which approaches unity; this indicates a close correlation between the experimental and the predicted values, as shown in Fig.
2. A check of the plots in Figs. 2 and 4 reveals that the scatter of residuals lies very close to the straight line, implying a normal distribution of errors. Moreover, the scatter in Figs. 3 and 5 reveals no obvious pattern or unusual structure. This shows a good relation between the residual and fitted values. The comparison of the F ratio for lack-of-fit with the standard value is presented in Table 4, corresponding to their degrees of freedom. The standard percentage point of the F distribution for the 95% confidence level is shown in Table 6. The F value (2.39) for lack-of-fit is smaller than the standard value, indicating that the proposed model is adequate. The analysis of the results shows that the radial depth of cut or step over (ae) has a significant influence on the surface roughness, with a 45.81% contribution to the model, because the surface finish improves as the step over decreases. Axial depth of cut is the next dominant factor, with a contribution of 1.78%. Cutting speed (Vc), with a 0.07% contribution, has the lowest effect on the surface roughness in ball end milling of OFHC Cu material.

Table 6 Results of ANOVA analysis

Similarly, Table 5 shows the ANOVA table for the mean roughness depth (Rz). It is found that the radial depth of cut or step over (ae) is the significant factor affecting Rz. Its contribution is 16.78%. For lack-of-fit, F = 3.22 < 3.96 (F0.05,14,6 = 3.96; Table 6), which shows that the lack-of-fit is insignificant and thus the model for Rz is also adequate. The next largest factor influencing Rz is the axial depth of cut (ap), with a contribution of 4.57%. The cutting speed (Vc), with a 0.48% contribution, has only a weak effect. The two-way interaction terms fz × ap, fz × Vc, ap × ae, and ap × Vc are not significant, as their p values are greater than 0.05.

Regression equation

The relationship between the factors and the performance measures is modeled by quadratic regression. The regression equations for both roughness components are formed by performing a backward elimination process. This procedure automatically removes the terms that are not significant. The roughness average Ra model is given below in Eq. (4):

$$ R_a = 0.695+0.0145 f_z-0.0229 a_p+0.116 a_e+0.0045 V_c-0.06 f_z^2-0.053 a_p^2-0.075 a_e^2-0.0638 V_c^2-0.021 f_z a_e-0.0106 f_z V_c-0.0043 a_p a_e-0.0056 a_e V_c $$

The mean roughness depth (Rz) is given by Eq. (5), with a determination coefficient (R²) of 82.14%:

$$ R_z = 3.652+0.11 f_z-0.176 a_p+0.338 a_e+0.057 V_c-0.356 f_z^2-0.338 a_p^2-0.428 a_e^2-0.381 V_c^2-0.041 f_z a_e+0.037 a_e V_c $$

The predicted values of the responses, roughness average (Ra) and mean roughness depth (Rz), obtained from regression Eqs. (4) and (5) for different combinations of machining parameters are reported in Table 3. Figures 6 and 7 show the comparison of the predicted values and the corresponding experimental values. It is observed that the predicted values are in close agreement with the experimental measurements.

The comparison between measured and predicted values for the roughness value (Ra)

The comparison between measured and predicted values for the roughness value (Rz)

3D surface and contour plots

The 3D surface graphs and contours for the surface roughness components (Ra and Rz) are shown in Figs. 8 and 9. All the surface graphs have a curvilinear profile corresponding to the quadratic model fitted. This means that all the interaction plots for surface roughness show a significant effect.
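As a small illustration (added here, not part of the paper), the reduced quadratic models of Eqs. (4) and (5) can be evaluated directly once the factors are coded with Eq. (3). The sketch below assumes, as is usual for CCD fits reported in coded form, that the equations take coded factor values in [−2, +2]; the factor ranges used are placeholders, since Table 2 is not reproduced in this excerpt.

```python
def code(X, Xmin, Xmax):
    """Eq. (3): map a natural factor value X in [Xmin, Xmax] to the coded range [-2, +2]."""
    return 2.0 * (2.0 * X - (Xmax + Xmin)) / (Xmax - Xmin)

def Ra(fz, ap, ae, Vc):
    """Eq. (4): roughness average (micron) in terms of coded factors."""
    return (0.695 + 0.0145 * fz - 0.0229 * ap + 0.116 * ae + 0.0045 * Vc
            - 0.06 * fz**2 - 0.053 * ap**2 - 0.075 * ae**2 - 0.0638 * Vc**2
            - 0.021 * fz * ae - 0.0106 * fz * Vc - 0.0043 * ap * ae - 0.0056 * ae * Vc)

def Rz(fz, ap, ae, Vc):
    """Eq. (5): mean roughness depth (micron) in terms of coded factors."""
    return (3.652 + 0.11 * fz - 0.176 * ap + 0.338 * ae + 0.057 * Vc
            - 0.356 * fz**2 - 0.338 * ap**2 - 0.428 * ae**2 - 0.381 * Vc**2
            - 0.041 * fz * ae + 0.037 * ae * Vc)

# At the design centre (all coded factors zero) the models return their constant terms.
print(Ra(0, 0, 0, 0), Rz(0, 0, 0, 0))          # 0.695, 3.652

# Placeholder example: a hypothetical feed of 0.15 mm/tooth on an assumed 0.10-0.20 range.
fz_c = code(0.15, 0.10, 0.20)                  # -> 0.0, the centre of that assumed range
print(Ra(fz_c, 0.0, -1.0, 0.5), Rz(fz_c, 0.0, -1.0, 0.5))
```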
3D surface interaction effect of the input parameters Interaction effect contour plots for Ra From the surface plot, it is depicted that with a setting of low radial depth of cut and near to higher level of axial depth of cut, cutting feed rate, and cutting speed, a good surface finish can be obtained. Confirmation test Figures 6 and 7 illustrate the variation between measured values and predicted responses. It can be seen that the results of the comparison are in close agreement with each other and can predict the values of surface roughness components (Ra and Rz) accurately with a 95% confidence interval. The present study successfully demonstrated the effect of cutting speed, depth of cut, feed per tooth, and step over on surface roughness in end milling of OFHC copper using a solid carbide ball nose end milling cutter. The following conclusions were derived from the study: Surface roughness analysis using RSM was successfully carried out. It was concluded that the systematic approach in central composite design is beneficial as it saves a number of experimentations required. Using the principles of response surface methodology, a functional relationship between the surface roughness and the cutting parameters is established. Quadratic model is fitted for both the roughness components (Ra and Rz). ANOVA tests result confirmed that models are adequate and can be adapted to mill OFHC Cu for achieving the desired surface finish. Comparison between actual and predicted values confirmed that the fitted quadratic model shows a good relational behavior. Lack-of-fit was insignificant. The surface roughness model suggests that the radial depth of cut provides primary contribution (45.81%) and influences most significantly on the surface roughness. Axial depth of cut provided a secondary contribution to the model followed by cutting feed rate and cutting speed. The obtained contours and surface plots will help on selecting the optimum cutting parameters in order to achieve higher surface finish. Alauddin, M., El Baradie, M. A., & Hashmi, M. S. J. (1996). Optimization of surface finish in end milling of Inconel 718. Journal of Materials Processing Technology, 56, 54–65. Benardos, P. G., & Vosniakos, G. C. (2002). Prediction of surface roughness in CNC face milling using neural networks and Taguchi's design of experiments. Robotics and Computer Integrated Manufacturing, 18, 343–354. Box, G. E., & Hunter, J. S. (1957). Multifactor experimental designs, Ann Math Stat (pp. 28–195). Colak, O., Kurbanoglu, C., & Kayacan, M. C. (2007). Milling surface roughness prediction using evolutionary programming methods. Materials and Design, 28, 657–666. CoroMill Plura Solid carbide end mills, Tool Handbook 2018 "SANDVIK Coromont"). Jeyakumar, S., Marimuthu, K., & RamachandranT. (2013). Prediction of cutting force, tool wear and surface roughness of Al6061/SiC composite for end milling operation using RSM. Journal of Mechanical Science and Technology, 27(9), 2813–2822. Lakshmi, V. V. K., & Subbaiah, V. K. (2012). Modelling and optimization of process parameters during end milling of hardened steel. International Journal of Engineering Research and Applications, 2, 674–679. Lin, S. C. (1994). Computer numerical control—From programming to networking. Albany: Delmar. Mahto, D., & Kumar, A. (2008). Optimization of process parameters in vertical CNC mill machines using Taguchi's Design of Experiments. Ariser, 4(2), 61–75. Makadia, J. A., & Nanavati, J. I. (2013). 
Optimization of machining parameters for turning operations based on response surface methodology. Elsevier Measurement, 46, 1521–1529. Mansour, A., & Abdalla, H. (2002). Surface roughness model for end milling: A semi- free cutting carbon case hardening steel (EN32) in dry condition. Journal of Materials Processing Technology, 124, 183–191. Montgomery, D. C. (2005). Design and analysis of experiments. New York: Wiley. Ozcelik, B., & Bayramoglu, M. (2006). The statistical modelling of surface roughness in high-speed flat end milling. International Journal of Machine Tools and Manufacture, 46, 1395–1402. Reddy, N. S. K., & Rao, P. V. (2005). Selection of optimum tool geometry and cutting conditions using surface roughness prediction model for end milling. International Journal of Advanced Manufacturing Technology, 26, 1202–1210. Routara, B. C., Bandyopadhyay, A., & Sahoo, P. (2009). Roughness modeling and optimization in CNC end milling using response surface method: Effect of work-piece material variation. International Journal of Advanced Manufacturing Technology, 40, 1166–1180. Suresh, P. V. S., Rao, P. V., & Deshmukh, S. G. (2002). A genetic algorithmic approach for optimization of surface roughness prediction model. International Journal of Machine Tools and Manufacture, 42, 675–680. Wang, M. Y., & Chang, H. Y. (2004). Experimental study of surface roughness in slot end milling AL2014-T6. International Journal of Machine Tools and Manufacture, 44, 51–57. Yang, L. J., & Chen, C. J. (2001). A systematic approach for identifying optimum surface roughness performance in end-milling operations. Journal of Industrial Technology, 17(2), 1–8. Zhang, Z. J., Chen, J. C., & Kirby, D. E. (2007). Surface roughness optimization in an end- milling operation using the Taguchi design method. Journal of Materials Processing Technology, 184, 233–239. The authors thank Manufacturing Technology Group (MTG) Lab of CISR-CMERI Durgapur, West Bengal, India, 713209 for providing experimental setup and all necessary equipment's required during experimentation. The authors would like to extend their sincere appreciation to the Deanship of Scientific Research at King Saud University for its funding of this research through the Research Group Project No. RG-1439-029. The authors declare that on acceptance of the manuscripts for publication the data used for the work will be available to all concerned. The will be interesting for both scientific and industrial purpose especially to all Cu industries. Centre of Excellence for Research in Engineering Materials, King Saud University, P.O. Box - 800, Riyadh, 11421, Saudi Arabia Asiful H. Seikh & Nabeel Alharthi Department of Mechanical Engineering, ISM, Dhanbad, Jharkhand, 826004, India Biplab Baran Mandal Department of Metallurgical Engineering, Jadavpur University, Kolkata, India Amit Sarkar Engineering Management Department, College of Engineering, Prince Sultan University, Riyadh, Saudi Arabia Muneer Baig Mechanical Engineering Department, College of Engineering, King Saud University, P.O. Box - 800, Riyadh, 11421, Saudi Arabia Nabeel Alharthi Mechanical Engineering Department, Prince Sattam Bin Abdulaziz University, Al Kharj, Saudi Arabia Bandar Alzahrani Asiful H. Seikh BBM contributed to conceptualization and validation. BBM and AS helped in the data creation, investigation and methodology, and project administration and resources. BBM, AS, and AHS helped in the formal analysis. AHS and NA acquired the funding. BA, AHS, and NA supervised the study. 
BBM, AS, AHS, BA, MB, and NA helped in writing—original draft. BA, MB, AHS, and NA contributed to writing—review and editing. All authors read and approved the final manuscript. Correspondence to Asiful H. Seikh. Seikh, A.H., Mandal, B.B., Sarkar, A. et al. Application of response surface methodology for prediction and modeling of surface roughness in ball end milling of OFHC copper. Int J Mech Mater Eng 14, 3 (2019). https://doi.org/10.1186/s40712-019-0099-0 OFHC copper End milling
2013, 2013(special): 673-684. doi: 10.3934/proc.2013.2013.673

Stochastic heat equations with cubic nonlinearity and additive space-time noise in 2D

Henri Schurz 1, Southern Illinois University, Department of Mathematics, MC 4408, 1245 Lincoln Drive, Carbondale, IL 62901-7316

Received September 2012 Revised March 2013 Published November 2013

Semilinear heat equations on rectangular domains in $\mathbb{R}^2$ (conduction through plates) with cubic-type nonlinearities and perturbed by an additive Q-regular space-time white noise are considered analytically. These models, as 2nd order SPDEs (stochastic partial differential equations) with non-random Dirichlet-type boundary conditions, describe the temperature- or substance-distribution on rectangular domains as met in engineering and biochemistry. We discuss their analysis by the eigenfunction approach, allowing us to truncate the infinite-dimensional stochastic systems (i.e. the SDEs of Fourier coefficients related to semilinear SPDEs), to control their energy, existence, uniqueness, continuity and stability. The functional of expected energy is estimated at time $t$ in terms of system-parameters.

Keywords: Gaussian noise, Q-regular space-time noise, semilinear heat equations, Wiener process, SPDEs, cubic nonlinearity, trace formula, Fourier solutions, Lyapunov functions, total energy functional, conservation laws, approximating Fourier coefficients.

Mathematics Subject Classification: Primary: 34F05, 37H10, 60H10, 60H30; Secondary: 65C3.

Citation: Henri Schurz. Stochastic heat equations with cubic nonlinearity and additive space-time noise in 2D. Conference Publications, 2013, 2013 (special) : 673-684. doi: 10.3934/proc.2013.2013.673
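To make the truncation idea in the abstract concrete, here is a minimal numerical sketch of a truncated Fourier-coefficient system for an equation of this type, $du = (\Delta u - u^3)\,dt + dW^Q$ on the unit square with homogeneous Dirichlet data, integrated by explicit Euler–Maruyama. This is an editorial illustration only, not the author's scheme; the truncation level, the noise intensities and all numerical parameters are placeholder assumptions.

```python
import numpy as np

# Truncated Fourier (sine eigenfunction) system for du = (Laplace(u) - u^3) dt + dW_Q
# on (0,1)^2 with homogeneous Dirichlet boundary conditions, explicit Euler-Maruyama.
M = 16                    # number of sine modes per direction (truncation level)
G = 64                    # physical grid used to evaluate the cubic term
dt, T = 1e-4, 0.1         # time step and final time (placeholders)

x = (np.arange(G) + 0.5) / G
S = np.sqrt(2.0) * np.sin(np.pi * np.outer(np.arange(1, M + 1), x))   # (M, G) basis values
m = np.arange(1, M + 1)
lam = np.pi**2 * (m[:, None]**2 + m[None, :]**2)   # Dirichlet Laplacian eigenvalues
sigma = 1.0 / lam                                  # placeholder Q-regular noise intensities

rng = np.random.default_rng(1)
c = np.zeros((M, M))                               # Fourier coefficients c_{mn}(t)
for _ in range(int(T / dt)):
    u = S.T @ c @ S                                # u on the grid from its coefficients
    f_hat = (S @ (-u**3) @ S.T) / G**2             # Galerkin projection of the cubic term
    dW = rng.normal(0.0, np.sqrt(dt), size=(M, M))
    c += dt * (-lam * c + f_hat) + sigma * dW      # Euler-Maruyama step for each mode

print(0.5 * np.sum(c**2))                          # truncated L^2 energy at time T
```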
Segev, "Methods in Neuronal Modeling: From Ions to Networks (2-nd edition),", MIT Press, (1998). Google Scholar E. Pardoux, Équations aux dérivées partielles stochastiques non linéaires monotones,, PhD. Thesis, (1975). Google Scholar E. Pardoux, Stochastic partial differential equations and filtering of diffusion processes,, Stochastics 3 (1979), (1979), 127. Google Scholar B.L. Rozovskii, "Stochastic Evolution Systems,", Kluwer, (1990). Google Scholar H. Schurz, "Stability, Stationarity, and Boundedness of Some Implicit Numerical Methods for Stochastic Differential Equations and Applications'',, Logos-Verlag, (1997). Google Scholar H. Schurz, Nonlinear stochastic wave equations in $\mathbbR^1$ with power-law nonlinearity and additive space-time noise,, Contemp. Math., 440 (2007), 223. Google Scholar H. Schurz, Existence and uniqueness of solutions of semilinear stochastic infinite-dimensional differential systems with H-regular noise,, J. Math. Anal. Appl., 332 (2007), 334. Google Scholar H. Schurz, Analysis and discretization of semi-linear stochastic wave equations with cubic nonlinearity and additive space-time noise,, Discrete Contin. Dyn. Syst. Ser. S, 1 (2008), 353. Google Scholar H. Schurz, Nonlinear stochastic heat equations with cubic nonlinearities and additive Q-regular noise in $\mathbbR^1$,, Electron. J. Differ. Equ. Conf., 19 (2010), 221. Google Scholar A.N. Shiryaev, "Probability,", Springer-Verlag, (1996). Google Scholar G.J. Stuart and B. Sakmann, Active propagation of somatic action potentials into neocortical pyramidal cell dendrites,, Nature 367 (1994) 69-72., 367 (1994), 69. Google Scholar H.C. Tuckwell and J.B. Walsh, Random currents through nerve membranes. I. Uniform poisson or white noise current in one-dimensional cables,, Biol. Cybern., 49 (1983), 99. Google Scholar C. Tudor, On stochastic evolution equations driven by continuous semimartingales,, Stochastics 23 (1988), 23 (1988), 179. Google Scholar J.B. Walsh, An introduction to stochastic partial differential equations,, Lecture Notes in Math., 1180 (1986), 265. Google Scholar J.B. Walsh, Finite element methods for parabolic stochastic PDE's,, Potential Anal., 23 (2005), 1. Google Scholar Henri Schurz. Stochastic wave equations with cubic nonlinearity and Q-regular additive noise in $\mathbb{R}^2$. Conference Publications, 2011, 2011 (Special) : 1299-1308. doi: 10.3934/proc.2011.2011.1299 Henri Schurz. Analysis and discretization of semi-linear stochastic wave equations with cubic nonlinearity and additive space-time noise. Discrete & Continuous Dynamical Systems - S, 2008, 1 (2) : 353-363. doi: 10.3934/dcdss.2008.1.353 Yanzhao Cao, Li Yin. Spectral Galerkin method for stochastic wave equations driven by space-time white noise. Communications on Pure & Applied Analysis, 2007, 6 (3) : 607-617. doi: 10.3934/cpaa.2007.6.607 Ying Hu, Shanjian Tang. Nonlinear backward stochastic evolutionary equations driven by a space-time white noise. Mathematical Control & Related Fields, 2018, 8 (3&4) : 739-751. doi: 10.3934/mcrf.2018032 Boris P. Belinskiy, Peter Caithamer. Energy of an elastic mechanical system driven by Gaussian noise white in time. Conference Publications, 2001, 2001 (Special) : 39-49. doi: 10.3934/proc.2001.2001.39 Tianlong Shen, Jianhua Huang, Caibin Zeng. Time fractional and space nonlocal stochastic boussinesq equations driven by gaussian white noise. Discrete & Continuous Dynamical Systems - B, 2018, 23 (4) : 1523-1533. doi: 10.3934/dcdsb.2018056 Guangying Lv, Hongjun Gao. 
Skewness Formula

The skewness formula is so called because the plotted graph is displayed in a skewed manner. Skewness is a measure used in statistics that helps reveal the asymmetry of a probability distribution. It can be either positive or negative. To calculate the skewness, we have to first find the mean and variance of the given data. The formula is:

\[\large g_{1}=\frac{m_{3}}{m_{2}^{3/2}},\qquad m_{3}=\frac{1}{n}\sum f_{i}\left(x_{i}-\bar{x}\right)^{3},\qquad m_{2}=\frac{1}{n}\sum f_{i}\left(x_{i}-\bar{x}\right)^{2}\]

where
$x_{i}$ are the observations (class marks)
$f_{i}$ are their frequencies
$\bar{x}$ is the mean
$n$ is the total number of observations
$m_{2}$ is the variance and $m_{3}$ is the third central moment

Solved example

Question. Find the skewness in the following data.

Height (inches) | Class Mark | Frequency
59.5 – 62.5 | 61 | 5
62.5 – 65.5 | 64 | 18
65.5 – 68.5 | 67 | 42
68.5 – 71.5 | 70 | 27
71.5 – 74.5 | 73 | 8

To know how skewed these data are as compared to other data sets, we have to compute the skewness. The sample size and sample mean should be found out first.

N = 5 + 18 + 42 + 27 + 8 = 100

$\overline{x}=\frac{\left(61\times 5\right)+\left(64\times 18\right)+\left(67\times 42\right)+\left(70\times 27\right)+\left(73\times 8\right)}{100}$

$\overline{x}=\frac{6745}{100}=67.45$

Now with the mean we can compute the skewness.

Class Mark, x | Frequency, f | xf | $\left(x-\overline{x}\right)$ | $\left(x-\overline{x}\right)^{2}\times f$ | $\left(x-\overline{x}\right)^{3}\times f$
61 | 5 | 305 | −6.45 | 208.01 | −1341.68
64 | 18 | 1152 | −3.45 | 214.25 | −739.15
67 | 42 | 2814 | −0.45 | 8.51 | −3.83
70 | 27 | 1890 | 2.55 | 175.57 | 447.70
73 | 8 | 584 | 5.55 | 246.42 | 1367.63
Sum | 100 | 6745 | n/a | 852.75 | −269.33
Sum / n | — | 67.45 | n/a | 8.5275 | −2.6933

Now, the skewness is

$g_{1}=\frac{m_{3}}{m_{2}^{3/2}}=\frac{-2.6933}{8.5275^{3/2}}=-0.1082$

For interpretation we have the following rules, as per Bulmer (1979):

If the skewness comes to less than −1 or greater than +1, the data distribution is highly skewed.
If the skewness comes to between −1 and $-\frac{1}{2}$ or between $+\frac{1}{2}$ and +1, the data distribution is moderately skewed.
If the skewness is between $-\frac{1}{2}$ and $+\frac{1}{2}$, the distribution is approximately symmetric.
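A quick check of the worked example (added for illustration; not part of the original page) in Python:

```python
# Moment coefficient of skewness for the grouped data in the example above.
marks = [61, 64, 67, 70, 73]
freqs = [5, 18, 42, 27, 8]

n = sum(freqs)                                                    # 100
mean = sum(x * f for x, f in zip(marks, freqs)) / n               # 67.45
m2 = sum(f * (x - mean) ** 2 for x, f in zip(marks, freqs)) / n   # 8.5275 (variance)
m3 = sum(f * (x - mean) ** 3 for x, f in zip(marks, freqs)) / n   # -2.6933
g1 = m3 / m2 ** 1.5
print(round(g1, 4))                                               # approximately -0.1082
```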
CommonCrawl
Contributed Papers
Session code: cp
Session type: Contributed Sessions
Gantumur Tsogtgerel (McGill University, Canada)

Tuesday, Jul 25 [McGill U., Burnside Hall, Room 1B45]
11:45 Abdullahi Adem (North-West University), Multiple wave solutions and conservation laws of the Date-Jimbo-Kashiwara-Miwa (DJKM) equation via symbolic computation
12:15 Karim Samei (Bu Ali Sina University, Hamedan, Iran), Singleton Bounds for R-additive Codes
14:15 Lahcen Laayouni (Al Akhawayn University), On the efficiency of the Algebraic Optimized Schwarz Methods
14:45 Pedro Pablo CARDENAS ALZATE (Universidad Tecnológica de Pereira), The Zhou's method for solving delay differential equations applied to biological models
15:45 Domingo Tarzia (CONICET and Universidad Austral), The one-phase Stefan problem with a latent heat of fusion depending of the position of the free boundary and its velocity
16:15 Nazish Iftikhar (National University of Computer and Emerging Sciences, Lahore Campus, Pakistan), Classifying Robertson-Walker scale factor using Noether's approach
17:00 Imran Naeem (Lahore University of Management Sciences (LUMS), Pakistan), A new approach to construct first integrals and closed-form solutions of dynamical systems for epidemics
17:30 Rehana Naz (Lahore School of Economics, Pakistan), The first integrals and closed-form solutions of optimal control problems

Wednesday, Jul 26 [McGill U., Burnside Hall, Room 1B45]
11:15 Vladislav Bukshtynov (Florida Institute of Technology), Optimal Reconstruction of Constitutive Relations for Porous Media Flows
11:45 Buthinah Bin Dehaish (King Abdullaziz University), Fixed Point Theorem for monotone Lipschitzian mappings
13:45 Chuang Xu (University of Alberta), Best finite constrained approximations of one-dimensional probabilities
14:15 Eric Jose Avila (Universidad Autonoma de Yucatan), Global dynamics of a periodic SEIRS model with general incidence rate
14:45 Eugen Mandrescu (Holon Institute of Technology, Israel), Shedding vertices and well-covered graphs
15:15 Yuan-Jen Chiang (University of Mary Washington), Leaf-wise Harmonic Maps of Manifolds with 2-dimensional Foliations
16:15 Stefan Veldsman (Nelson Mandela University), Generalized complex numbers over near-fields
16:45 Ryad Ghanam (Virginia Commonwealth University in Qatar), Non-Solvable subalgebras of gl(4,R)

Abdullahi Adem (North-West University)
Multiple wave solutions and conservation laws of the Date-Jimbo-Kashiwara-Miwa (DJKM) equation via symbolic computation
In this talk, we present soliton solutions and conservation laws for the DJKM equation with the aid of symbolic computation. The soliton solutions of the DJKM equation are constructed by using the multiple exp-function method, which is a generalization of Hirota's perturbation scheme. The solutions obtained involve generic phase shifts and wave frequencies. Furthermore, infinitely many conservation laws are derived by using the multiplier method, which is an indicator of the integrability of the underlying equation.
Location: McGill U., Burnside Hall, Room 1B45

Karim Samei (Bu Ali Sina University, Hamedan, Iran)
Singleton Bounds for R-additive Codes
Shiromoto (Linear Algebra Appl. 295 (1999) 191-200) obtained the basic exact sequence for the Lee and Euclidean weights of linear codes over $\mathbb{Z}_{\ell}$ and, as an application, he found the Singleton Bounds for linear codes over $\mathbb{Z}_{\ell}$ with respect to Lee and Euclidean weights. Huffman (Adv. Math.
Commun. 7 (3) (2013) 349-378) obtained the Singleton Bound for $\mathbb{F}_{q}$-linear $\mathbb{F}_{q^{t}}$-codes with respect to the Hamming weight. Recently, the theory of $\mathbb{F}_{q}$-linear $\mathbb{F}_{q^{t}}$-codes was generalized to $R$-additive codes over $R$-algebras by Samei and Mahmoudi. In this paper, we generalize Shiromoto's results for linear codes over $\mathbb{Z}_{\ell}$ to $R$-additive codes. As an application, when $R$ is a chain ring, we obtain the Singleton Bounds for $R$-additive codes over free $R$-algebras. Among other results, the Singleton Bounds for additive codes over Galois rings are given.
MSC code(s): 94B05

Lahcen Laayouni (Al Akhawayn University)
On the efficiency of the Algebraic Optimized Schwarz Methods
In this study we investigate the efficiency of the Algebraic Optimized Schwarz Methods (AOSM) in solving large-scale linear systems. The AOSM, used as preconditioners in solving linear systems, converge in two iterations for a decomposition with two sub-domains using optimal transmission blocks. These blocks require the inverse of large sub-matrices of the original matrix of the linear system. In this paper we are interested in approximating the transmission blocks with adequate approximations. Numerical comparisons will be presented for different types of problems. This is joint work with M. Gander and D. Szyld.
MSC code(s): 65M55

Pedro Pablo CARDENAS ALZATE (Universidad Tecnológica de Pereira)
The Zhou's method for solving delay differential equations applied to biological models
In this work, we apply Zhou's method, or the Differential Transformation Method (DTM), to some models that arise in the biological sciences and are described by nonlinear delay differential equations. The efficiency of the DTM is illustrated by investigating the convergence results on numerical models that show the reliability and accuracy of this method.
MSC code(s): 132

Domingo Tarzia (CONICET and Universidad Austral)
The one-phase Stefan problem with a latent heat of fusion depending of the position of the free boundary and its velocity
From the one-dimensional consolidation of fine-grained soils with threshold gradient, a special type of Stefan problem can be derived in which the seepage front, due to the presence of this threshold gradient, exhibits the features of a moving boundary. In this kind of problem, in contrast with the classical Stefan problem, the latent heat depends inversely on the rate of change of the seepage front (e.g. Zhou-Bu-Lu, Int. J. Numerical and Analytical Methods in Geomechanics, 37 (2013), 2825-2832). A one-phase Stefan problem with a latent heat that not only depends on the rate of change of the free boundary but also on its position is studied. The aim of this analysis is to extend prior results, finding an analytical solution that recovers, by specifying some parameters, the solutions already examined in the literature regarding Stefan problems with variable latent heat. Moreover, we also consider different boundary conditions at the fixed face. This is a joint paper with Julieta Bollati (CONICET and Universidad Austral).
MSC code(s): 35R35

Nazish Iftikhar (National University of Computer and Emerging Sciences, Lahore Campus, Pakistan)
Classifying Robertson-Walker scale factor using Noether's approach
The universe can be depicted in the best way by using Friedmann-Robertson-Walker (FRW) models. FRW models of the universe are considered to have properties like homogeneity and isotropy.
The universe is continuously expanding, which can be represented by the Robertson-Walker scale factor. The Robertson-Walker scale factor is a function of time 't'. The scale factor is useful to define the redshift and the Hubble parameter. The Hubble parameter gives information about the evolution of the universe and is also useful in calculating the age of the universe. In the present research work, Noether's approach was applied to classify FRW spacetime. The spacetime was considered for three types of universe, i.e. closed, open, and flat, with curvature parameter 'k' equal to 1, -1, and 0, respectively. Different values of the Robertson-Walker scale factor were considered which gave the nontrivial symmetries. By using the Noether equation and a perturbed Lagrangian, an over-determined system of partial differential equations was obtained. For the closed, open and flat universe, maximal and minimal sets of Noether operators were acquired. For every Noether operator, the corresponding energy-type first integral of motion was calculated.
MSC code(s): 35Dxx

Imran Naeem (Lahore University of Management Sciences (LUMS), Pakistan)
A new approach to construct first integrals and closed-form solutions of dynamical systems for epidemics
A new class of non-standard Hamiltonian known as the "artificial Hamiltonian" is introduced, which results in an artificial Hamiltonian system of first-order ordinary differential equations (ODEs). The notion of an artificial Hamiltonian is developed for dynamical systems of ODEs. Also, it is shown that every system of second-order ODEs can be expressed as an artificial Hamiltonian system of first-order ODEs. The newly developed notion of an artificial Hamiltonian system gives a new way to solve dynamical systems of first-order ODEs, or systems of second-order ODEs which can be expressed as an artificial Hamiltonian system, by utilizing the known techniques applicable to non-standard Hamiltonian systems. We employ this notion to solve dynamical systems of first-order ODEs arising in epidemics.
MSC code(s): 37K05

Rehana Naz (Lahore School of Economics, Pakistan)
The first integrals and closed-form solutions of optimal control problems
Pontryagin's maximum principle (Pontryagin, 1987) provides the necessary conditions for the optimum in optimal control problems in terms of the time $t$, the state variables $q^i$, the costate variables $p_i$ and the control variables $u_i$. One can eliminate the control variables in terms of state and costate variables, which reduces the conditions of Pontryagin's maximum principle to the following non-standard Hamiltonian system:
$$ \dot q^i=\frac{\partial H}{\partial p_i},\qquad \dot p_i=-\frac{\partial H}{\partial q^i}+\Omega^i(t,q^i,p_i).$$
This type of non-standard Hamiltonian system arises widely in optimal control problems in different fields of applied mathematics. A mechanical system with non-holonomic nonlinear constraints and non-potential generalized forces results in a non-standard Hamiltonian system. In optimal control problems of economic growth theory involving a non-zero discount factor, systems of this type arise and are known as current-value Hamiltonian systems. It is proposed how to modify the partial Hamiltonian approach proposed earlier for the current-value Hamiltonian systems arising in economic growth theory (Naz et al., 2014) in order to apply it to epidemics, mechanics and other areas as well.
To show the effectiveness of the approach developed here, it is utilized to construct the first integrals and closed-form solutions of some models from the real world. Moreover, the essential aspects of infectious disease spread are uncovered and policies are provided to public health decision makers to compare and implement different control programs. For the economic growth model, some policies are provided to the government in order to achieve sustainable growth.

Vladislav Bukshtynov (Florida Institute of Technology)
Optimal Reconstruction of Constitutive Relations for Porous Media Flows
Comprehensive full-physics models for flow in porous media typically involve convection-diffusion partial differential equations whose parameters are unknown and have to be reconstructed from experimental data. Quite often these unknown parameters are coefficients represented by space-dependent, sometimes correlated, functions, e.g. porosity, permeability, transmissibility, etc. However, special complexity is seen when the reconstructed properties are considered as state-dependent parameters, e.g. the relative permeability coefficients $k_{rp}$. Modern petroleum reservoir simulators still use simplified approximations of $k_{rp}$ as single-variable functions of the $p$-phase saturation $s_p$ given in the form of tables or simple analytical expressions. This form is hardly reliable in modern engineering applications used, e.g., for enhanced oil recovery, carbon storage, and modeling thermal and capillary pressure relations. Thus, the main focus of our research is on developing a novel mathematical concept for building new models where $k_{rp}$ are approximated by multi-variable functions of fluid parameters, namely phase saturations $s_p$ and temperature $T$. Reconstruction of such complicated dependencies requires advanced mathematical and optimization tools to enhance the efficiency of existing engineering procedures with a new computational framework generalized for use in various earth science applications.

Buthinah Bin Dehaish (King Abdullaziz University)
Fixed Point Theorem for monotone Lipschitzian mappings
In this talk we will consider a new class of Lipschitzian mappings which are monotone, and then we will discuss some fixed point theorems for these mappings.
MSC code(s): 47

Chuang Xu (University of Alberta)
Best finite constrained approximations of one-dimensional probabilities
This paper studies best finitely supported approximations of one-dimensional probability measures with respect to the $L^r$-Kantorovich (or transport) distance, where either the locations or the weights of the approximations' atoms are prescribed. Necessary and sufficient optimality conditions are established, and the rate of convergence (as the number of atoms goes to infinity) is discussed. Special attention is given to the case of best uniform approximations (i.e., all atoms having equal weight). The elementary approach is based on best approximations of (monotone) $L^r$-functions by step functions, which is different from, and naturally complementary to, the classical Voronoi partition approach. This is joint work with Dr. Arno Berger.

Eric Jose Avila (Universidad Autonoma de Yucatan)
Global dynamics of a periodic SEIRS model with general incidence rate
We consider a family of periodic SEIRS epidemic models with a fairly general incidence rate; we will show that the basic reproduction number determines the global dynamics of the models and is a threshold parameter for persistence.
We estimate the basic reproduction number and we provide numerical simulations to illustrate our findings. MSC code(s): 34C25 Eugen Mandrescu Holon Institute of Technology, Israel Shedding vertices and well-covered graphs A set $S$ of vertices in a graph $G$ is independent if no two vertices from $S$ are adjacent. If all maximal independent sets are of the same cardinality, then $G$ is well-covered (or unmixed) (Plummer, 1970). $G$ belongs to class $\mathbf{W}_{2}$ if every $2$ disjoint independent sets are included in $2$ disjoint maximum independent sets (Staples, 1975). There are deep interactions between shellability, vertex decomposability and well-coveredness (Castrill\'{o}n, Cruz, Reyes, 2016). Let $v\in V\left( G\right) $ and $N\left( v\right) $ be its neighborhood. If for every independent set $S$ of $G-\left( N\left( v\right) \cup\{v\}\right) $, there is some $u\in N\left( v\right) $ such that $S\cup\left\{ u\right\} $ is independent, then $v$ is a shedding vertex of $G$ (Woodroofe, 2009). Let $Shed\left( G\right) $ denote the set of all shedding vertices. Clearly, no isolated vertex is shedding, and no graph in $\mathbf{W}_{2}$ has isolated vertices. In this talk, we show that deleting a shedding vertex does not change the maximum size of a maximal independent set including a given independent set. Specifically, for well-covered graphs, it means that a non-isolated vertex $v\in Shed\left( G\right) $ if and only if $G-v$ is well-covered. Thus $G$ belongs to class $\mathbf{W}_{\mathbf{2}}$ if and only if $Shed\left( G\right) =V\left( G\right) $. There exist well-covered graphs without shedding vertices; e.g., $C_{7}$. On the other hand, there are non-well-covered graphs with $Shed\left( G\right) =V\left( G\right) $. Problem 1. Find all well-covered graphs having no shedding vertices. Problem 2. Find all graphs having $Shed\left(G\right) =V\left( G\right)$. Yuan-Jen Chiang University of Mary Washington Leaf-wise Harmonic Maps of Manifolds with 2-dimensional Foliations In the 1980s, A. Connes [Proc. of Symp. in Pure Math, AMS, 1982] and E. Ghys [J. Func. Anal., 1988] proved the Gauss-Bonnet type theorem for compact manifolds with 2-dimensional foliations. In this paper, we derive the expressions of harmonic non $\pm$ holomorphic maps of Riemann surfaces. We study the relationship between leaf-wise harmonic maps and harmonic maps. We investigate the Gauss-Bonnet type theorem for leaf-wise harmonic maps between manifolds with 2-dimensional foliations, which generalize the results of Connes and Ghys. This paper has recently appeared in the Bulletin of the Institute of Mathematics, Academia, Sinica. MSC code(s): 58E20 Stefan Veldsman Nelson Mandela University Generalized complex numbers over near-fields In the early 20th century, Dickson (1905) investigated the redundancy or not of the field axioms. By a clever disturbance of the multiplication of a field, he demonstrated the existence of an algebraic structure fulfilling all the requirements of a field except one of the distributive axioms. These structures are known as Dickson near-fields, but there are many near-fields not of this type. Almost immediately near-fields were shown to be not just an algebraic curiosity. Veblen and Wedderburn (1907) showed that near-fields are exactly the algebraic structures required for coordinization of geometries that lead to non-Desarguesian planes. In a monumental paper, Zassenhaus (1935/6) showed that all finite near-fields are Dickson near-fields except for 7 strays. 
There are many other applications of near-fields and the more general near-rings became an important and useful area of investigation with its own concerns and problems catering for non-linear algebraic systems. The construction of the complex numbers over the reals has been generalized in many ways leading to the 2-dimensional elliptical complex numbers (= complex numbers) and the parabolic and hyperbolic complex numbers. These can be extended to higher dimensions and using an arbitrary ring as the base ring. It is possible to define matrices and polynomials over near-rings. Using these, one can construct generalized complex numbers over a near-field. In this talk, this construction will be formalized. We also report on properties of this algebraic structure and highlight similarities and differences with its motivating example; the usual complex numbers over the real field. MSC code(s): 16Y30 Ryad Ghanam Virginia Commonwealth University in Qatar Non-Solvable subalgebras of gl(4,R) In this talk, I will present all the simple, then semi-simple, subalgebras of gl(4, R). Each such semi-simple subalgebra acts by commutator on gl(4, R). In each case the invariant subspaces are found and the results used to determine all possible subalgebras of gl(4, R) that are not solvable MSC code(s): 22 E
CommonCrawl
Mark the correct alternative in the following question:
A student was asked to prove a statement P(n) by induction. He proved that P(k + 1) is true whenever P(k) is true for all k > 5, k ∈ N, and also that P(5) is true. On the basis of this he could conclude that P(n) is true
(a) for all $n \in \mathbf{N}$
(b) for all n > 5
(c) for all n ≥ 5
(d) for all n < 5

Since P(5) is true, and $\mathrm{P}(k+1)$ is true whenever $\mathrm{P}(k)$ is true for all $k > 5$, $k \in \mathbf{N}$, by the principle of mathematical induction we get that P(n) is true for all n ≥ 5.
Hence, the correct alternative is option (c).
CommonCrawl
How do I replace adjacent matrix elements? I have a matrix where I would like to look for all elements around element x (in this example -2) and replace all the ones that are 0 with the number 1. My matrix (matrix0) looks as follows: \begin{array}{cccccccccc} -1 & -1 & -1 & -1 & -1 & -1 & -1 & -1 & -1 & -1 \\ -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 \\ -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 \\ -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 \\ -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -2 \\ -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 \\ -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 \\ -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 \\ -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 \\ -1 & -1 & -1 & -1 & -1 & -1 & -1 & -1 & -1 & -1 \\ \end{array} matrix0 = {{-1, -1, -1, -1, -1, -1, -1, -1, -1, -1}, {-1, 0, 0, 0, 0, 0, 0, 0, 0, -1}, {-1, -1, -1, -1, -1, -1, -1, -1, -1, -1}}; I wrote a module, which gives me the positions of the elements around my x like this: Clear[around] (* The following module gives a list of all positions next to that of x0 in matrix mat0 *) around[x0_, mat0_] := Module[{x, mat, pos}, x = x0; mat = mat0; pos[x_] := Flatten[Position[mat, x]]; Cases[Tuples[Range[Dimensions[mat][[1]]], 2], {a_, b_} /; (a == pos[x][[1]] && b == pos[x][[2]] - 1) || (a == pos[x][[1]] - 1 && b == pos[x][[2]]) || (a == pos[x][[1]] - 1 && b == pos[x][[2]] - 1) || (a == pos[x][[1]] + 1 && b == pos[x][[2]] + 1) || (a == pos[x][[1]] && b == pos[x][[2]] + 1) || (a == pos[x][[1]] + 1 && b == pos[x][[2]]) || (a == pos[x][[1]] + 1 && b == pos[x][[2]] + 1)] For -2 this yields the result: around[-2, matrix0] {{4,9},{4,10},{5,9},{6,9},{6,10}} I now tried to let all of these elements which are 0 be replaced by 1, however Mathematica is only giving me a Null output. Any ideas why this doesn't work? For[i=1,i<=Length[around[-2,matrix0]],i++, If[Part[matrix0,around[-2,matrix0][[i,1]],around[-2,matrix0][[i,2]]]==0, ReplacePart[matrix0,around[-2,matrix0][[i]]->1],Null]] EDIT: These answers are already helping me learn new functions thank you. However my question mainly was why my version does not work seeing as I can't find an illogical step in it. The reason my version is so unnecessarily complicated is partly because I am a Mathematica beginner but also because I want to generalise this method to build up my matrix. After having changed the 0s around -2 to 1s, I would like to change the 0s around all 1s to 2s and the 0s around all 2s to 3s and so on. Any suggestion what I should look into to solve this on my own? Or is my approach just fundamentally doomed to failure? list-manipulation matrix replacement kglr ThatEpicDudeThatEpicDude $\begingroup$ Try {m, n} = Dimensions[mat0]; ReplacePart[mat0, Cases[First[Position[mat0, x0]] + # & /@ DeleteCases[Tuples[{-1, 0, 1}, 2], {0, 0}], {j_, k_} /; 1 <= j <= m && 1 <= k <= n] -> 1]. $\endgroup$ – J. M. can't deal with it ♦ $\begingroup$ It would be very helpful if you could share your matrix as Mathematica code rather than as TeX. $\endgroup$ – MarcoB $\begingroup$ Somewhat related: (140099) $\endgroup$ $\begingroup$ As noted, please include data that can be easily copied the next time. $\endgroup$ Update: Re: why my version does not work Documentation >> ReplacePart: ReplacePart[expr, i -> new] yields an expression in which the i'th part of expr is replaced by new. that is, it does not replace expr with the expression it yields. So, with a small change, namely assigning the value of ReplacePart[...] 
to matrix0 in each step of the For loop, your version also works: For[i = 1, i <= Length[around[-2, matrix0]], i++, If[Part[matrix0, around[-2, matrix0][[i, 1]], around[-2, matrix0][[i, 2]]] == 0, matrix0 = ReplacePart[matrix0, around[-2, matrix0][[i]] -> 1], Null]] matrix0 // TeXForm $\left( \begin{array}{cccccccccc} -1 & -1 & -1 & -1 & -1 & -1 & -1 & -1 & -1 & -1 \\ -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 \\ -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 \\ -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & -1 \\ -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & -2 \\ -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & -1 \\ -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 \\ -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 \\ -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 \\ -1 & -1 & -1 & -1 & -1 & -1 & -1 & -1 & -1 & -1 \\ \end{array} \right)$ By the way, since ReplacePart[expr, {i, j, ...} -> new] replaces the part at position {i, j, ...} you can use much simpler matrix0 = ReplacePart[matrix0, around[-2, matrix0] -> 1] instead of using a For loop. Original answer - updated: ClearAll[aroundF, replaceF] aroundF[m_, t_] := DeleteDuplicates[Join @@ Function[{k}, Clip[#, {1, #2}] & @@@ Transpose[{k + #, Dimensions[m]}] & /@ Tuples[{-1, 0, 1}, {2}]] /@ Position[m, t]] replaceF[old_: 0, new_: 1][m_, t_] := MapAt[If[#===old, new, #] &, m, aroundF[m,t]] replaceF[][matrix0, -2] // TeXForm replaceF[0, aa][matrix0, -2] // TeXForm $ \left( \begin{array}{cccccccccc} -1 & -1 & -1 & -1 & -1 & -1 & -1 & -1 & -1 & -1 \\ -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 \\ -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 \\ -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \text{aa} & -1 \\ -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \text{aa} & -2 \\ -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \text{aa} & -1 \\ -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 \\ -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 \\ -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 \\ -1 & -1 & -1 & -1 & -1 & -1 & -1 & -1 & -1 & -1 \\ \end{array} \right) $ kglrkglr $\begingroup$ To the RE: Using only ReplacePart would be a good trick but that does not include that I only want parts that are 0 to be replaced does it? $\endgroup$ – ThatEpicDude Here is a version that is pretty fast: Clear[setNeighbors] setNeighbors[m_, n_, foo_Rule:(0->1)] := Module[{nmatrix,omatrix,old,new}, {old,new} = List @@ foo; nmatrix=Unitize @ ListCorrelate[ {{1,1,1}, {1,0,1}, {1,1,1}}, Unitize @ Clip[m, {n, n}, {0, 0}], {2, -2}, omatrix = 1 - Unitize[m - old]; m + omatrix nmatrix (new-old) For your example: m = {{-1, -1, -1, -1, -1, -1, -1, -1, -1, -1}, setNeighbors[m, -2] //TeXForm Here is the timing for a much larger matrix: data = RandomInteger[100, {1000, 1000}]; setNeighbors[data, 3, 1->3]; //AbsoluteTiming {0.051501, Null} Finally, it seems that @klgr's solution might have a bug. Compare: SeedRandom[2]; m = RandomInteger[10, {5, 5}]; m //TeXForm $\left( \begin{array}{ccccc} 8 & 4 & 5 & 4 & 7 \\ 4 & 0 & 1 & 0 & 4 \\ 3 & 7 & 3 & 0 & 2 \\ 7 & 8 & 7 & 9 & 3 \\ 6 & 2 & 3 & 8 & 9 \\ \end{array} \right)$ r1 = setNeighbors[m, 3, 1->3]; r1 //TeXForm r2 = setNeighborValues[1,3][m,3]; r1 === r2 I think the $m_{2, 3}$ entry should be a 3. Carl WollCarl Woll $\begingroup$ Very nice! I wanted to apply ListCorrelate too but I ran out of time yesterday so I just left a link in the comments. $\endgroup$ $\begingroup$ @Carl, thank you. The bug is fixed now. $\endgroup$ – kglr Here is a cleaner approach using Developer`PartitionMap. (m as defined in my first answer.) 
fn[b_] := If[# != 0, #, Total[1 - Unitize[b + 2], 2]] & @ b[[2, 2]]

Developer`PartitionMap[fn, m, {3, 3}, 1, 2] // MatrixForm

Mr.Wizard

A related post: Detecting patterns of black and white stones on a 2D board
I'll try to apply the same method here. I'll start by assigning your data to m:
Two-dimensional replacement rules
Using CellularAutomaton:

p = Partition[#, 3] & /@ Permutations[Table[_, {8}]~Append~-2];
p[[All, 2, 2]] = 0;
rules = Append[Thread[(Delete[p, 5]) -> 1], ArrayPad[{{x_}}, 1, _] :> x];
CellularAutomaton[rules, m, 1][[2]] // MatrixForm

rep[m_, v_] := Module[{a, r, c, p},
  p = {-1, 0, 1};
  a = ArrayPad[m, 1];
  {r, c} = FirstPosition[a, v];
  a[[r + p, c + p]] = a[[r + p, c + p]] /. (0) -> 1;
  a[[2 ;; -2, 2 ;; -2]]]

rep[m, -2] // MatrixForm

eldo
CommonCrawl
NEMA NU 2–2007 performance characteristics of GE Signa integrated PET/MR for different PET isotopes Paulo R. R. V. Caribé1, 2Email authorView ORCID ID profile, M. Koole2, Yves D'Asseler1, Timothy W. Deller3, K. Van Laere2 and S. Vandenberghe1 EJNMMI Physics20196:11 https://doi.org/10.1186/s40658-019-0247-x Received: 3 February 2019 Published: 4 July 2019 Fully integrated PET/MR systems are being used frequently in clinical research and routine. National Electrical Manufacturers Association (NEMA) characterization of these systems is generally done with 18F which is clinically the most relevant PET isotope. However, other PET isotopes, such as 68Ga and 90Y, are gaining clinical importance as they are of specific interest for oncological applications and for follow-up of 90Y-based radionuclide therapy. These isotopes have a complex decay scheme with a variety of prompt gammas in coincidence. 68Ga and 90Y have higher positron energy and, because of the larger positron range, there may be interference with the magnetic field of the MR compared to 18F. Therefore, it is relevant to determine the performance of PET/MR for these clinically relevant and commercially available isotopes. NEMA NU 2–2007 performance measurements were performed for characterizing the spatial resolution, sensitivity, image quality, and the accuracy of attenuation and scatter corrections for 18F, 68Ga, and 90Y. Scatter fraction and noise equivalent count rate (NECR) tests were performed using 18F and 68Ga. All phantom data were acquired on the GE Signa integrated PET/MR system, installed in UZ Leuven, Belgium. 18F, 68Ga, and 90Y NEMA performance tests resulted in substantially different system characteristics. In comparison with 18F, the spatial resolution is about 1 mm larger in the axial direction for 68Ga and no significative effect was found for 90Y. The impact of this lower resolution is also visible in the recovery coefficients of the smallest spheres of 68Ga in image quality measurements, where clearly lower values are obtained. For 90Y, the low number of counts leads to a large variability in the image quality measurements. The primary factor for the sensitivity change is the scale factor related to the positron emission fraction. There is also an impact on the peak NECR, which is lower for 68Ga than for 18F and appears at higher activities. The system performance of GE Signa integrated PET/MR was substantially different, in terms of NEMA spatial resolution, image quality, and NECR for 68Ga and 90Y compared to 18F. But these differences are compensated by the PET/MR scanner technologies and reconstructions methods. NEMA NU 2–2007 The Signa PET/MR has MR-compatible silicon photomultiplier (SiPM) detector technology characterized by a superior light detection as compared to conventional PET technology [1–3]. The advantage of SiPMs versus avalanche photodiodes (APDs) is a faster response, enabling the combination of excellent time-of-flight (TOF) PET (close to 400 ps) imaging with MR scanning. The smaller detector bore and long axial extent (25 cm) of the PET ring (in comparison to state-of-the-art PET/CT) result in a superior sensitivity of 21 cps/kBq, thus allowing a lower PET tracer dosing besides the evident dose reduction by omitting the CT [4]. Hybrid PET/MR is a relatively new multimodality imaging technique and offers the potential for combined structural, functional, and molecular imaging assessment of a wide variety of oncologic, neurologic, cardiovascular, and musculoskeletal conditions [5, 6]. 
However, the challenges beyond those of a technical nature remain for PET/MR imaging, including the standardization of appropriateness criteria, image acquisition parameters, and clinically relevant and as well commercially available isotopes. With PET becoming more widely used, the transport logistics have allowed faster shipments of radioisotopes to small imaging centers. The majority of PET studies in clinical routine are still being performed with 18F, because of its physical properties combined with efficient transportation logistics which widely increase its availability. The same holds for 68Ga and 90Y, of which the use is not dependent on the availability of a cyclotron. However, the physical properties of the PET radioisotopes are quite different from 18F. 18F almost exclusively decays via positron emission (96.8%) and with a relatively low maximum energy of the positron of 0.6335 MeV. The maximum and mean range of 18F are equal to 2.4 and 0.6 mm. The other 3% of decays is via electron capture [7]. The use of the generator-based isotope 68Ga has seen a steady increase in the last years. It is obtained from a 68Ge/68Ga generator obviating the need for a cyclotron on site. One generator will typically be used for about 1 year [8] and the equilibrium between 68Ga and 68Ge is re-attained rapidly enough to allow multiple radiotracers preparations a day. 68Ga is used for labeling both small compound and macromolecules, such as 68Ga-PSMA targeting the prostate-specific membrane antigen or 68Ga-labeled tracers targeting the somatostatin receptor expressed by neuroendocrine tumors, which are considered as key applications for combined PET/MR [9–11]. 68Ga is not a pure positron emitter and has a more complex decay scheme than 18F. Non-pure isotopes emit additional gammas that may even directly fall into the energy window accepted by the PET scanner. These high-energy gammas have some probability of generating spurious coincidences after scattering in the patient or via e+/e− pair production in the detector or the patient [12–14]. In 87.8% of the decays, 68Ga will emit a positron with a maximum energy of 1.899 MeV and a mean energy of 0.89 MeV with a half-life t1/2 = 67.6 min. The much higher energy of the positron emission (compared to 18F) leads to an increased maximum and mean range of 8.9 and 2.9 mm. It also emits additional gammas of 578.52 keV (0.034), 805.83 keV (0.094), 1077.35 keV (3.22), 1261.08 keV (0.094), and 1883.16 keV (0.137). In the same way, 90Y has rapidly gained attention as one of the most widely used therapeutic radioisotope in nuclear medicine. 90Y is used in radioembolization of liver tumors. Tiny glass or resin beads called microspheres are administered in the hepatic artery and are transported into the blood vessels at the tumor site. The spheres get physically trapped and the radioactive isotope 90Y delivers a high dose (via electrons) of radiation to the tumor. Several centers also use their PET system to image the therapeutic isotope 90Y. Studies have shown that 90Y-DOTA and 90Y-DTPA have potential in intra-vascular radionuclide therapy and 90Y can simultaneously work as an imaging agent and a therapeutic [15–17]. 90Y is mainly a β− emitter with a very small branching ratio for positron production. In 0.003186% of the decays, there will be the emission of an e+/e− pair at 1.76 MeV. As the transition energy is 1.76 MeV, it remains 738 keV kinetic energy to be split between the electron and the positron in order to conserve the null momentum. 
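For clarity, the 738 keV quoted here is simply the pair-production arithmetic spelled out (my own addition, not part of the original text):

$$ E_{\mathrm{kin}} = E_{\mathrm{transition}} - 2\,m_{e}c^{2} \approx 1760\ \mathrm{keV} - 2 \times 511\ \mathrm{keV} = 738\ \mathrm{keV} $$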
With a half-life of 64.1 h, 90Y produces a weak but useable PET signal [7, 18], as illustrated by several clinical and phantom studies. Furthermore, these isotopes may be of particular interest for PET/MR in prostate cancer, liver studies, and follow-up of radionuclide therapy. Therefore, it is relevant to determine the performance of PET/MR for these clinically relevant and commercially available isotopes in order to ensure correct functionality and optimal image quality. For PET scanners in particular, the National Electrical Manufacturers Association (NEMA) has defined a standard to assess the performance of the tomographic system, which is widely accepted by manufacturers [19]. The NEMA NU 2–2007 standard identifies 18F as the radionuclide to be used for all tests. Due to factors such as positron range, interference with magnetic field, and non-pure emissions with additional gammas, the results may be different when non-conventional radioisotopes, such as 68Ga and 90Y [20–22], would be used. The aim of this study is to assess the impact of using different PET isotopes for the NEMA tests performance evaluation of the GE Signa integrated PET/MR. NEMA NU 2–2007 performance measurements for characterizing spatial resolution, sensitivity, image quality, accuracy of attenuation and scatter corrections (IQ), and noise equivalent count rate (NECR) were performed using 18F and 68Ga. For 90Y, all tests except NECR tests were also performed. All phantom experiments were performed on the MP24 version of the GE Signa integrated PET/MR whole-body hybrid system, installed in UZ Leuven, Belgium. The MR component of the hybrid system consists of a 3.0 Tesla static magnetic field, a radiofrequency (RF) transmit body coil, and a gradient coil system which provide a maximum amplitude of 44 mT/m and a maximum slew rate of 200 T/m/s. The PET component is comprised of 5 detector rings, each consisting of 28 detector blocks. Total axial FOV is equal to 25 cm. Table 1 contains a summary of important design and performance parameters.

Table 1. Design and PET performance specifications
Axial FOV: 25 cm
Transaxial FOV: –
Photodetector: SiPM
Scintillator: LYSO
Crystal element size: 25 × 4.0 × 5.3 mm3
Integrated in-bore
Time resolution: < 400 ps
Energy window: 425–650 keV
Coincidence timing window: 4.57 ns (± 2.29 ns)
NEMA NU 2–2007 PET test (a): 22.5 cps/kBq
Peak NECR: 212.2 kcps
Activity at Peak NECR: 18.1 kBq/ml
Scatter Fraction at Peak NECR: –
Spatial Resolution (Axial): 5.53–6.95 mm at 1 and 10 cm
(a) GE Healthcare acceptance tests [23]

The PET detectors are based on a lutetium-based scintillator (LYSO) readout with MR-compatible silicon photomultiplier technology [3]. Before NEMA testing, a well counter calibration scan was performed with 18F in a uniform cylindric phantom. As a recommended calibration, the activity injected was measured using two dose calibrators (Capitec–CRC-55tR) with settings for different isotopes. The following measurements were performed according to the NEMA NU 2–2007 protocol.

Spatial resolution
A high activity concentration of approximately 200 MBq/ml was used to generate point sources (drop of activity raised in a capillary). In total, more than 500,000 counts were acquired. Both axial and transaxial resolution were measured at two different positions in the axial z-direction: in the central position of the FOV and at a position a quarter of the total axial FOV away from the center. The source point position was adjusted until all x-y-z values fall between ± 2 mm from the required position.
At each of these axial positions, the resolution was measured centrally in the FOV (1 cm horizontal offset relative to the center) as well as 10 cm horizontal offset and 10 cm vertical offset relative to the center. Data was reconstructed with filtered back-projection. The full width at half maximum (FWHM) and full width at tenth of maximum (FWTM) of the point source response function in all three directions were determined by one-dimensional response functions along profiles though the image volume in three orthogonal directions. Sensitivity was tested with the NEMA sensitivity phantom, composed of a line source with 5 different thicknesses of aluminum. The 70-cm-long line source was filled with a volume of approximately 2.3 ml. The activity level was equal to 10.7 MBq and 8.7 MBq at scan start for 18F and 68Ga. For 90Y, the activity level was adjusted to 444.9 MBq, to compensate for the low positron abundance and to keep the scan time acceptable. Using this activity, the total number of counts collected was above 2 million counts. This measurement was done in the center of the FOV and at 10 cm away from the center of the FOV. Data was collected for a period of time to ensure that at least 10.000 trues per slice were collected. The system sensitivity was calculated by fitting the decay-corrected count rate of each acquisition to an exponential and extrapolating the value for a hypothetical acquisition with no aluminum tubes over the source (no attenuation). Axial sensitivity profiles were generated by calculating the sensitivity of each slice for the transaxially centered data acquisitions that used only the smallest aluminum tube. Scatter fraction, noise equivalent count rate (NECR) A 70-cm-long plastic tube line source (3.2 mm in inner diameter) was filled with a calibrated activity of 905 MBq and 871 MBq in a 5.0 ml of solution for 18F and 68Ga, respectively. The line source was inserted 4.5 cm below the central axis of a 70-cm-long cylindrical polyethylene test phantom. The center of the NEMA scatter phantom was positioned at the FOV center and the data were acquired overnight. After twenty-nine frames of data were extracted from the list mode data, NEMA specifications were used to derive the trues, randoms, scatter, and NECR from the prompts dataset in each frame. The results were plotted as a function of effective activity concentration. In addition, the accuracy of count losses and randoms corrections was determined by extrapolating image results from low count rates. Image quality, accuracy of attenuation, and scatter corrections Image quality was measured by acquiring the NEMA image quality phantom. A 5-cm-diameter cylindrical insert filled with Styrofoam pellets was positioned in the center of the phantom to simulate lung tissue. The warm background volume of the phantom was filled with an activity concentration of 5.3 kBq/ml. Four hot spheres with diameters of 10, 13, 17, and 22 mm were filled with an activity concentration 4 times the background for 18F and 68Ga. For 90Y, a higher 8:1 ratio was used since typical contrasts in liver therapies are normally higher than 4:1. The two cold spheres with diameters of 28 and 37 mm were filled with water (except for the 90Y image quality test, in which all of the spheres were filled with an 8:1 ratio of activity). Background activity from outside the scanner FOV was generated by a line source inserted into the same cylindrical phantom as used in the scatter fraction, count losses, and randoms measurement. 
It contains 116 MBq solution of the isotope used in the measurement and was placed on the bed axially adjacent to the body phantom. The percentage contrast recovery for the hot and cold spheres and the background variability were calculated, as defined in the NEMA standard. The percentage contrast recovery (in an ideal case = 100%) is determined for each hot sphere j by $$ {Q}_{S,j}=\frac{\left({C}_{S,j}/{C}_{B,j}\right)-1}{\left({a}_S/{a}_B\right)-1}\bullet 100\left[\%\right] $$ Where CS, j is the average counts of regions of interest (ROIs) on the spheres. These are positioned in the transverse image slice that contains the centers of the spheres. CB, j represents the average counts in the background ROI. The terms aS and aB are activity concentration in the hot spheres and background, respectively. The phantom has also 2 large spheres which are not filled with isotope. For each nonradioactive sphere j, the percentage contrast recovery QC, j was calculated by $$ {Q}_{C,j}=\left(1-\frac{C_{C,j}}{C_{B,j}}\right)\bullet 100\left[\%\right] $$ Where CC, j and CB, j are average counts in the ROI for sphere j and average of all background ROI counts for sphere j. In order to determine the percentage background variability Nj as a measure for the image noise for sphere j (in an ideal case = 0%), the following equation was used: $$ {N}_j=\left(\frac{SD_j}{C_{B,j}}\right)\bullet 100\left[\%\right] $$ SDj is the standard deviation of the background ROI counts for sphere j. In addition, the central cylinder of the phantom did not contain any activity, and the relative error was calculated to determine the accuracy of scatter and attenuation correction as follows: $$ \Delta {C}_{\mathrm{lung},i}=\frac{C_{\mathrm{lung},i}}{C_{B,i}}\bullet 100\left[\%\right] $$ Where ΔClung, i is the relative error per percentage units for each slice i, Clung, i is the average counts in the lung insert ROI, and CB, i is the average of the 60 (37 mm) background ROIs drawn for the image quality analysis [19]. Image-quality phantom images The phantom data were obtained with a 10-min scan for 18F and 68Ga acquisitions. For 90Y, a long 15-h scan time and a shorter 30 min representing a clinical acquisition were obtained using list mode data selection. The image quality phantom was reconstructed in a volume of 89 images using ordered subset expectation maximization reconstruction algorithm (OSEM) including time-of-flight (TOF) and scatter corrections with 2, 3, and 4 iterations of 28 subsets. The scatter correction for 68Ga and 90Y were defined as dirty emitters to account for the gamma. These isotopes allow an additional fitting parameter in the scatter tail scaling process [22]. All reconstruction schemes were performed using a matrix size of 256 × 256 with 2.08 × 2.08 × 2.78-mm3 voxel size, with 2 mm and no post-smoothing with and without point spread function (PSF). The GE PET/MR uses a system-generated approach that includes a CT-based template attenuation correction for the NEMA IQ phantom. The spatial resolution (FBP reconstruction) results for each isotope are presented in Table 2. As a double check, the 18F-measured values (at the same equipment across all isotopes) were used as a reference for all measurements performed on the GE Signa PET/MR. The FWHM radial and tangential resolution is slightly degraded for 68Ga and 90Y. In comparison with 18F-measured values, the relative differences were 17.8% and − 1.3% at 1 cm and 27.9% and 3.5% at 10 cm in the axial direction for 68Ga and 90Y, respectively. 
With regard to the FWTM of the axial resolution, the percentage differences relative to the 18F values were 70% and 3.3% at 1 cm, and 57.3% and 12.3% at 10 cm off-center, for 68Ga and 90Y, respectively.

Table 2. Spatial resolution tests: FWHM and FWTM (mm) at 1 cm and at 10 cm; transverse values are radial and tangential averaged together.

Sensitivity test results for 18F, 68Ga, and 90Y are 21.8, 20.1, and 0.653·10−3 cps/kBq at the center position and 21.2, 19.7, and 0.667·10−3 cps/kBq at 10 cm off-center. Table 3 gives the measured average sensitivity values at the transaxial center and 10 cm off-center. The data are also compared to theoretical values estimated from the average measured 18F sensitivity; these estimates were calculated based on the difference in branching ratio relative to 18F. The sensitivity values are in line with the lower positron fraction of the isotopes.

Table 3. Sensitivity test results: comparison between the average measured sensitivity and the theoretical values relative to 18F (columns: branching ratio; average sensitivity measured, cps/kBq; theoretical value, cps/kBq). For 90Y: branching ratio 31.86·10−6, measured 0.66·10−3, theoretical 0.71·10−3 cps/kBq. (Theoretical values are relative to the 18F-measured value.)

The peak NECR, the corresponding activity concentration, and the scatter fraction at peak NECR are presented in Table 4 and Fig. 1 for 18F and 68Ga. Table 4 summarizes the comparison between 18F and 68Ga in terms of scatter fraction at peak NECR, peak NECR, activity concentration at peak NECR, and maximum absolute error.

Table 4. Scatter fraction, peak NECR, and source activity test results for 18F and 68Ga (scan type; measured values in kcps and kBq/ml; maximum absolute error).

Fig. 1 NEMA counting rate measurements: count rates (a) and scatter fraction (b) vs activity concentration for both isotopes, 18F and 68Ga. Notice that the peak of the NECR curve (a) of 68Ga is lower (at clinical NECR) and appears at higher activity concentrations.

For 18F, the NECR has a maximum of 216.8 kcps at an activity concentration of 18.6 kBq/ml. At the peak, the scatter fraction was 43.3%, comparable to the results obtained on three separate scanners installed in three institutions [23]. For 68Ga, the trues, randoms, and scatter count rates at the same activity concentration are lower in comparison to 18F. The measured NECR peak for 68Ga was also clearly lower, at 205.6 kcps. This peak is obtained at a higher activity concentration of 20.4 kBq/ml. The scatter fraction at peak NECR was less than 1% lower than the value measured for 18F in our institution (Fig. 1a and Table 4). After the full 15-h acquisitions (for both isotopes), the maximum absolute value of the slice error was 3.0% and 6.0% for 18F and 68Ga. The results for contrast recovery versus sphere diameter of the image quality phantom are shown in Fig. 2. In Fig. 2a, the contrast recovery of the reconstructed image of the phantom without PSF and post-smoothing filter was lower for 68Ga and 90Y, as shown in the relative difference to the 18F-measured values. The contrast recovery increased when TOF, PSF, and a post-smoothing filter were used, as shown in Fig. 2b.
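As a practical aside (my own sketch, not part of the original methods), the contrast-recovery and background-variability values discussed here follow directly from the ROI-based definitions of $Q_{S,j}$, $Q_{C,j}$ and $N_{j}$ given above; the function names and ROI numbers below are hypothetical, not the study data:

# Minimal sketch with hypothetical ROI values (not the study data):
# NEMA NU 2 image-quality metrics from ROI statistics.
def hot_contrast(C_sphere, C_bkg, a_sphere, a_bkg):
    # Percentage contrast recovery Q_S for a hot sphere
    return (C_sphere / C_bkg - 1.0) / (a_sphere / a_bkg - 1.0) * 100.0

def cold_contrast(C_sphere, C_bkg):
    # Percentage contrast recovery Q_C for a cold (non-radioactive) sphere
    return (1.0 - C_sphere / C_bkg) * 100.0

def background_variability(sd_bkg, C_bkg):
    # Percentage background variability N
    return sd_bkg / C_bkg * 100.0

# Example: a hot sphere imaged at a 4:1 sphere-to-background activity ratio
print(round(hot_contrast(C_sphere=18.2, C_bkg=5.3, a_sphere=21.2, a_bkg=5.3), 1))  # 81.1
print(round(cold_contrast(C_sphere=1.1, C_bkg=5.3), 1))                            # 79.2
print(round(background_variability(sd_bkg=0.35, C_bkg=5.3), 1))                    # 6.6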
Contrast recovery and percentage difference relative to 18F for each sphere size: a without PSF and post-smoothing filter and b with PSF and 2 mm post-smoothing filter The background variability and the lung region relative error for 18F, 68Ga, and 90Y are shown in Table 5 for different sphere sizes. 90Y has a much lower quality and suffers from low counts, for most spheres the background variability. The lung error is clearly higher than for 18F and 68Ga. This is also visually confirmed by the clinical reconstructions (30 min acquisition) in Fig. 3. Background variability and lung error of TOF-OSEM 4 iteration and 28 subsets image reconstructed, with and without PSF and 2 mm post-smooth filter for 18F, 68Ga, and 90Y respectively TOF-OSEMa TOF-PSF-OSEM 2 mm Background variability [%] 10 mm Lung error [%] a4 iterations and 28 subsets Axial slices of the phantom reconstructed using TOF-OSEM for 2, 3, and 4 iterations and 28 subsets for different acquisition times and different isotopes: a without PSF and post-smoothing filter and b with PSF and 2 mm Gaussian filter When compared in terms of number of iterations (Fig. 3), the noise level increased with increasing of the number of iterations for all isotopes. The aim of this study was to assess the impact of using different PET isotopes for the NEMA tests performance evaluation of the GE Signa integrated PET/MR. The performance and characteristics of PET/MR have been investigated based on NEMA NU 2–2012 with regard to 18F [20, 21, 23], but not for different PET isotopes. Furthermore, the NEMA NU 22012 version is only slightly different from the 2007 version; the most substantial changes are relatively minor, mostly designed to make the test easier to conduct, mere reproducible, or more clearly defined [19]. In this study, we conveniently choose to evaluate the PET/MR using NEMA NU 2–2007 because GE Healthcare has reported their acceptance testing on this version [24]. The system performance of the GE Signa integrated PET/MR was substantially different, in terms of NEMA spatial resolution and image quality for 68Ga and 90Y PET imaging test as compared to 18F. In the transverse plane, the magnetic field reduces the effective positron range, and the dominant factor on spatial resolution seems to be the detector pixel size and the transverse resolution is therefore comparable with the result of 18F. This effect was confirmed by other studies with simulations for different isotopes and field strengths [25, 26]. The main magnetic field along the axial direction leads to an increased positron range in this direction and a pronounced reduction of the range in the transversal plane for high-energy positrons. On the other hand, the positron range effect of the magnetic field is not significant for 18F [27] and no significative effect was found for 90Y. In agreement with these studies, the FWHM difference relative to 18F-measured values were 17.8% and − 1.3% at 1 cm and 27.9% and 3.5% at 10 cm off-center in the axial direction for 68Ga and 90Y, respectively, as shown in Table 2. However, the NEMA spatial resolution test is designed to characterize the detector, rather than the isotope which leads to limitations to account the effect of magnetic field on positron range on the transaxial resolution measurements. The capillary is very small, and any positron that escapes the capillary is not accounted for in the measurements. 
In addition, axially, the annihilation could occur in the tube-sealing compound and beyond that, the axial test is slightly poor due to the rebinning process and larger pixel size in the z-axis. The NEMA image quality test was also substantially different in the measured contrast recovery, as shown in Fig. 2. It seems that the inferior resolution also affects the contrast recovery of the radioactive spheres in the NEMA quality phantom for 68Ga and 90Y. While the results look visually (Fig. 3) similar between 18F and 68Ga for TOF-OSEM without resolution modeling and post-smooth filter, there is (Fig. 2a) a clearly lower contrast recovery for the smaller spheres in 68Ga and also lower contrast recovery in 90Y, which is probably caused by the increased positron range and loss in resolution. A similar approach using different PET isotopes in a brain phantom measured at different field strengths was conducted by Shah et al. [27]. The contrast of the reconstructed image of the brain phantom filled with 68Ga was significantly affected by the magnetic field in the axial direction more than 18F (low-energy positron emitter). However, errors in scatter correction and the use of different sphere to background ratios might have influenced these results [3, 23, 28, 29]. A low-frequency offset of the data makes the images appear to have more or less contrast recovery. And different ratios lead to different contrast recovery. This can explain the crossing of the 90Y curve (ratio 8:1) in Fig. 2. With regards to noise level and the average lung residual error (Table 5), the results were comparable between 18F and 68Ga, but for 90Y, the background variability and the lung error are clearly higher than for 18F and 68Ga, which is also visually seen (15 h and 30 min acquisition) in Fig. 3. However, when comparing the reconstructed images using resolution modeling and 2 mm post-smooth Gaussian filter to reduce the noise, the contrast recovery increased with acceptable noise level, as shown in Figs. 2b and 3b. Although high-energy positron emitters are affected by the field strengths, recent developments in reconstruction methods including dedicated positron range correction have successfully corrected this effect [30]. A new Bayesian penalized likelihood reconstruction algorithm which uses a block sequential regularized expectation maximization as an optimizer (including TOF and PSF) was introduced in the last few years by GE Healthcare (Q.Clear) on their PET scanners in order to improve clinical image quality. Unlike traditional OSEM reconstruction, which increases the noise with the number of iterations (Fig. 3), this algorithm improves image quality by controlling noise amplification during image reconstruction [31]. The mean sensitivity results shown in Table 3 are in line with theoretical values as expected for 68Ga and 90Y test. The primary factor for the sensitivity change is the scale factor related to the positron emission fraction (96.7%, 87.9%, and 0.003186% for 18F, 68Ga, and 90Y, respectively). The low branching ratio of 90Y explains the substantial quality difference of the reconstructed transverse image quality phantom as compared with 18F and 68Ga images, as shown in Fig. 3. In several design factors (Table 2) including Compton scatter recovery [31, 32], the longer axial FOV and reduced detector ring diameter lead to higher count rates and an increased sensitivity, both in stand-alone operation and with simultaneous MR image acquisition [33]. 
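To make the branching-ratio scaling explicit, here is a back-of-the-envelope check of my own (using the average measured 18F sensitivity of roughly 21.5 cps/kBq), consistent with the theoretical values quoted for Table 3:

$$ S_{\mathrm{theo}}(^{68}\mathrm{Ga}) \approx 21.5 \times \frac{0.879}{0.967} \approx 19.5\ \mathrm{cps/kBq}, \qquad S_{\mathrm{theo}}(^{90}\mathrm{Y}) \approx 21.5 \times \frac{31.86\times 10^{-6}}{0.967} \approx 0.71\times 10^{-3}\ \mathrm{cps/kBq} $$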
NEMA count rate performance and accuracy measurements summarized in Table 4 and Fig. 1a suggest that the scanner provides accurate quantitative measurements and utilizes effective randoms and dead time correction methods for 18F [23]. For 68Ga, the scatter fraction at the NECR peak (Table 4 and Fig. 1b) was slightly lower compared to 18F measured in our institution and in the GE test. This is primarily due to the 1.2% (1.883 MeV) fraction of 68Ga that decays by \( {\beta}_2^{+} \), which can result in a small prompt gamma (1.077 MeV) [12–14] contamination of the PET data. The prompt gammas of 68Ga can fall directly into the energy window and be accepted by the PET scanner. This happens when the 1.077 MeV gamma scatters in the phantom and deposits an energy falling in the main energy window. In this case, there will be a coincidence with a true 511 keV photon resulting from the same decay [18]. Contributions in which only the gamma is detected would add to the randoms, which does not affect the calculation of the scatter fraction. The 68Ga NECR peak was clearly lower than that measured for 18F and appears at a slightly higher activity concentration (Fig. 1a). The lower peak NECR can be explained by the additional 1.077 MeV gamma, which leads to additional detections increasing the dead time of the detector blocks. These gammas can also lead to additional randoms or scatter when they lose enough energy to fall into the energy window. However, the effect of these prompt gammas is clearly very small from a scatter fraction perspective, and this holds for all activity concentration values below the peak NECR (Table 4); the maximum absolute value of the slice error is 2.9% and 6.0% for 18F and 68Ga, respectively. There is also no appreciable impact on the measured residual activity of the lung insert in the IQ phantom (Table 5). In summary, the overall GE Signa PET/MR system performance with TOF capability based on SiPM detectors shows substantially different system characteristics for each of these commercially available isotopes. However, the NEMA spatial resolution test is designed to characterize the detector, rather than the isotope, and needs to be adapted in order to properly account for the effect of the magnetic field on positron range in the transaxial resolution measurements. The variety of prompt gammas in coincidence of these isotopes and the interference of the MR field with the positron range, however, seem to have been compensated by the PET scanner technologies, which, in combination with recent developments in reconstruction methods (regularized TOF OSEM and PSF), lead to a comparable noise equivalent count rate and a good scatter fraction.
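For reference, the count-rate quantities discussed in this section follow the standard NEMA definitions; a minimal sketch of my own with illustrative numbers (not the acquired data; the commonly used NECR form with a noiseless randoms estimate is assumed):

# Minimal sketch with illustrative numbers (not the acquired data):
# standard count-rate quantities from trues (T), scatter (S) and randoms (R).
def scatter_fraction(T, S):
    # NEMA scatter fraction SF = S / (T + S)
    return S / (T + S)

def necr(T, S, R):
    # Noise-equivalent count rate, NECR = T^2 / (T + S + R)
    # (common form assuming a noiseless randoms estimate)
    return T ** 2 / (T + S + R)

T, S, R = 400.0, 300.0, 120.0            # kcps, one hypothetical frame
print(round(scatter_fraction(T, S), 3))  # 0.429 -> 42.9 %
print(round(necr(T, S, R), 1))           # 195.1 kcps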
The maximum absolute value of the slice error is 2.9% and 6.0% for 18F and 68Ga, respectively. These performance results are compensated by the PET scanner technologies and reconstructions methods. DTPA: Diethylenetriaminepentaacetic acid FOV: LBS: Lutetium-based scintillator MR: NECR: Noise equivalent count rate NEMA: National Electrical Manufacturers Association OSEM: Ordered subset expectation maximization PSF: Point spread function PSMA: Prostate membrane antigen Recovery coefficient Region of interest SiPM: Silicon photomultipliers SUV: Standardized uptake value TOF: Time of flight VOI: Volume of interest T. W. Deller is an employee of GE Healthcare. No other potential conflict of interest relevant to this article was reported. This present study was conducted with a grant support to the first author from CNPq, the National Council of Technological and Scientific Development – Brazil (process number 235040/2014-2). All authors contributed to design of the study. PC and MK performed data collection and analysis. All authors discussed the results and implications and commented on the manuscript. All authors read and approved the final manuscript. Medical Imaging and Signal Processing – MEDISIP, Ghent University, Corneel Heymanslaan 10, 9000 Ghent, Belgium Division of Nuclear Medicine and Molecular Imaging, UZ/KU Leuven, Herestraat 49 B-3000, Leuven, Belgium GE Healthcare, Waukesha, WI 53188-1678, USA Vandenberghe S, Marsden PK. PET-MRI: a review of challenges and solutions in the development of integrated multimodality imaging. Phys Med Biol. 2015;60(4):R115–54.View ArticleGoogle Scholar Yamamoto S, Watabe H, Kanai Y, Aoki M, Sugiyama E, Watabe T, Imaizumi M, Shimosegawa E, Hatazawa J. FPGA-based RF interference between PET and MRI sub-systems in a silicon-photomultiplier-based PET/MRI system. Phys Med Biol. 2011;56:4147–59. https://doi.org/10.1088/0031-9155/61/9/3500.View ArticlePubMedGoogle Scholar Levin CS, Maramraju SH, Khalighi MM, Deller TW, Delso G, Jansen F. Design features and mutual compatibility studies of the time-of-flight PET capable GE SIGNA PET/MR system. IEEE Trans Med Imaging. 2016;35(8):1907–14.View ArticleGoogle Scholar Delso G, Ziegler S. PET/MRI system design. Eur J Nucl Med Mol Imag. 2009;36 Suppl 1:S86–92. https://doi.org/10.1007/s00259-008-1008-6.View ArticleGoogle Scholar Bai B, Li Q, Leahy RM. Magnetic resonance-guided positron emission tomography image reconstruction. Semin Nucl Med. 2013;43(1):30–44.View ArticleGoogle Scholar Judenhofer MS, Cherry SR. Applications for preclinical PET/MRI. Semin Nucl Med. 2013;43:19–29. https://doi.org/10.1053/j.semnuclmed.2012.08.004.View ArticlePubMedGoogle Scholar Conti M, Eriksson L. Physics of pure and non-pure positron emitters for PET: a review and a discussion. EJNMMI Phys. 2016;3(1):8. https://doi.org/10.1186/s40658-016-0144-5.View ArticlePubMedPubMed CentralGoogle Scholar Banerjee SR, Pomper MG. Clinical applications of Gallium-68. Appl Radiat Isot. 2013;76:2–13. https://doi.org/10.1016/j.apradiso.2013.01.039.View ArticlePubMedGoogle Scholar Afshar-Oromieh A, Zechmann CM, Malcher A, Eder M, Eisenhut M, Linhart HG, et al. Comparison of PET imaging with a 68Ga-labelled PSMA ligand and18F-choline-based PET/CT for the diagnosis of recurrent prostate cancer. Eur J Nucl Med Mol Imaging. 2014;41(1):11–20. https://doi.org/10.1007/s00259-013-2525-5.View ArticlePubMedGoogle Scholar Freitag MT, Radtke JP, Hadaschik BA, et al. 
Comparison of hybrid 68Ga-PSMA PET/MRI and 68Ga-PSMA PET/CT in the evaluation of lymph node and bone metastases of prostate cancer. Eur J Nucl Med Mol Imaging. 2016;43:70. https://doi.org/10.1007/s00259-015-3206-3.View ArticlePubMedGoogle Scholar Schmin DT, John H, Zweifel R, Cservenyak T, Westera G, Goerres GW, et al. Fluorocholine PET/CT in patients with prostate cancer: initial experience. Radiology. 2005;235:623–8. https://doi.org/10.1148/radiol.2352040494.View ArticleGoogle Scholar Andreyev A, Celler A. Dual-isotope PET using positron-gamma emitters. Phys Med and Biol. 2011;56:453956. https://doi.org/10.1088/0031-9155/56/14/020.View ArticleGoogle Scholar Cal-González J, Moore SC, Park MA, et al. Improved quantification for local regions of interest in preclinical PET imaging. Phys Med Biol. 2015;60(18):7127–49.View ArticleGoogle Scholar Velikyan I. Prospective of 68Ga-radiophamaceutical developments. Theranostics. 2014;4:47–80. https://doi.org/10.7150/thno.7447.View ArticleGoogle Scholar Pandey U, Mukherjee A, Sarma HD, Das T, Pillai MRA, Venkatesh M. Evaluation of 90Y-DTPA and 90Y-DOTA for potential application in intra-vascular radionuclide therapy. Appl Radiat Isot. 2002;57(3):313–8.View ArticleGoogle Scholar Carlier T, Willowson KP, Fourkal E, Bailey DL, Doss M, Conti M. 90Y -PET imaging: exploring limitations and accuracy under conditions of low counts and high random fraction. Med Phys. 2015;42:4295. https://doi.org/10.1118/1.4922685.View ArticlePubMedGoogle Scholar Rao AV, Akabani G, Rizzieri DA. Radioimmunotherapy for non-Hodgkin's lymphoma. Clin Med Res. 2005;3(3):157–65.View ArticleGoogle Scholar Lhommel R, Van Elmbt L, Goffette P, et al. Feasibility of 90Y TOF PET-based dosimetry in the liver metastasis therapy using SIR-spheres. Eur J Nucl Med Mol Imaging. 2010;37(9):1654–62.View ArticleGoogle Scholar National Electrical Manufacturers Association. NEMA Standards Publication NU 2–2007, Performance measurements of positron emission tomographs: Rosslyn; 2007. p. 26–33. https://www.nema.org. Accessed 14 Dec 2018 Ronald B, Ivo R, Thomas B, Gaspar D, Maqsood Y, Harald HQ, Bernhard S. Quality control for quantitative multicenter whole-body PET/MR studies: a NEMA image quality phantom study with three current PET/MR systems. Med Phys. 2015;42:5961. https://doi.org/10.1118/1.4930962.View ArticleGoogle Scholar Susanne Z, Bjoern WJ, Harald B, Daniel HP, Harald HQ. NEMA image quality phantom measurements and attenuation correction in integrated PET/MR hybrid imaging. Ziegler et al EJNMMI Physics. 2015;2:18. https://doi.org/10.1186/s40658-015-0122-3.View ArticleGoogle Scholar Braad PEN, et al. PET imaging with the non-pure positron emitters: 55Co, 86Y and 124I. Phys Med Biol. 2015;60(9):3479.View ArticleGoogle Scholar Grant AM, Deller TW, Khalighi MM, Maramraju SH, Delso G, Levin CS. NEMA NU 2-2012 performance studies for the SiPM-based ToF-PET component of the GE SIGNA PET/MR system. Med Phys. 2016;43:2334–43. https://doi.org/10.1118/1.4945416.View ArticlePubMedGoogle Scholar GE Healthcare: SIGNA PET/MR NEMA NU 2–2007 Manual. (2007). https://www.nema.org/Manufacturers/Pages/GE-Healthcare.aspx Accessed 11 Dec 2018.Google Scholar Kraus R, Delso G, Ziegler SI. Simulation study of tissue-specific positron range correction for the new biograph mMR. IEEE Trans Nucl Sci. 2012;59(5):1900–9. https://doi.org/10.1109/TNS.2012.2207436.View ArticleGoogle Scholar Shih-ying H, Dragana S, Jaewon Y, Uttam S, Youngho S. 
The effect of magnetic field on positron range and spatial resolution in an integrated whole-body time-of-flight PET/MRI system. IEEE Nucl Sci Symp Conf Rec Nov. 2014. https://doi.org/10.1109/NSSMIC.2014.7431006. Jon Shah N, Hans H, Christoph W, Lutz T, Joachim K, Liliana C, Elena RK, Syed MQ, Heinz HC, Hidehiro I. Effects of magnetic fields of up to 9.4T on resolution and contrast of PET images as measured with an MRBrainPET. PLoS One. 2014;9(4):e95250. https://doi.org/10.1371/journal.pone.0095250.View ArticlePubMedGoogle Scholar Spasic E, Jehanno N, Bblondeel-Gomes S, Huchet V, Luporsi M, Mounat TC. Phantom and clinical evaluation for new PET/CT reconstruction algorithm: Bayesian penalized likelihood reconstruction algorithm Q.Clear. J Nucl Med Radiat Ther. 2018;9(4):371. https://doi.org/10.4172/2155-9619.1000371.View ArticleGoogle Scholar Deller Timothy W, Khalighi MM, Jansen FP, Glover GH. PET imaging stability measurements during simultaneous pulsing of aggressive MR sequences on the SIGNA PET/MR system. J Nucl Med. 2018;59(1):167–72.View ArticleGoogle Scholar Ottavia BB, Afroditi E, Matteo C, Niccolò C, Nicola B, Charalampos T. PET iterative reconstruction incorporating an efficient positron range correction method. Physica Medica. 2016;32:323–30. https://doi.org/10.1016/j.ejmp.2015.11.005.View ArticleGoogle Scholar Hsu DFC, IIan E, Peterson WT. Studies of a next-generation silicon-photomultiplier-based time-of-flight PET/CT system. J Nucl Med. 2017;58(9):1511–8. https://doi.org/10.2967/jnumed.117.189514.View ArticlePubMedGoogle Scholar Wagadarikar AA, Adrian I, Sergei D, McDaniel DL. Sensitivity improvement of time-of-flight (ToF) PET detector through recovery of Compton scattered annihilation photons. IEEE Trans Nucl Sci. 2014;61(1):121–5.View ArticleGoogle Scholar Andrei I, et al. Simultaneous whole-body time-of-flight 18F-FDG PET/MRI: a pilot study comparing SUVmax with PET/CT and assessment of MR image quality. Clin Nucl Med. 2015;40(1):1–8. https://doi.org/10.1097/RLU.0000000000000611.View ArticleGoogle Scholar
Deformation of tropical Hirzebruch surfaces and enumerative geometry
Authors: Erwan Brugallé and Hannah Markwig
Journal: J. Algebraic Geom. 25 (2016), 633-702
DOI: https://doi.org/10.1090/jag/671
Published electronically: June 2, 2016
We illustrate the use of tropical methods by generalizing a formula due to Abramovich and Bertram, extended later by Vakil. Namely, we exhibit relations between enumerative invariants of the Hirzebruch surfaces $\Sigma _n$ and $\Sigma _{n+2}$, obtained by deforming the first surface to the latter. Our strategy involves a tropical counterpart of deformations of Hirzebruch surfaces and tropical enumerative geometry on a tropical surface in three-space.
Dan Abramovich and Aaron Bertram, The formula $12=10+2\times 1$ and its generalizations: counting rational curves on $\mathbf F_2$, Advances in algebraic geometry motivated by physics (Lowell, MA, 2000) Contemp. Math., vol. 276, Amer. Math. Soc., Providence, RI, 2001, pp. 83–88. MR 1837110, DOI https://doi.org/10.1090/conm/276/04512 Omid Amini, Matthew Baker, Erwan Brugallé, and Joseph Rabinoff, Lifting harmonic morphisms I: metrized complexes and Berkovich skeleta, Res. Math. Sci. 2 (2015), Art. 7, 67. MR 3375652, DOI https://doi.org/10.1186/s40687-014-0019-0 Omid Amini, Matthew Baker, Erwan Brugallé, and Joseph Rabinoff, Lifting harmonic morphisms II: Tropical curves and metrized complexes, Algebra Number Theory 9 (2015), no. 2, 267–315. MR 3320845, DOI https://doi.org/10.2140/ant.2015.9.267 D. Abramovich and C. Chen, Logarithmic stable maps to Deligne-Faltings pairs II. arXiv:1102.4531v2 Lars Allermann and Johannes Rau, First steps in tropical intersection theory, Math. Z. 264 (2010), no. 3, 633–670. MR 2591823, DOI https://doi.org/10.1007/s00209-009-0483-1 Benoît Bertrand, Erwan Brugallé, and Grigory Mikhalkin, Genus 0 characteristic numbers of the tropical projective plane, Compos. Math. 150 (2014), no. 1, 46–104. MR 3164359, DOI https://doi.org/10.1112/S0010437X13007409 Benoît Bertrand, Erwan Brugallé, and Grigory Mikhalkin, Tropical open Hurwitz numbers, Rend. Semin. Mat. Univ. Padova 125 (2011), 157–171. MR 2866125, DOI https://doi.org/10.4171/RSMUP/125-10 Arnaud Beauville, Complex algebraic surfaces, London Mathematical Society Lecture Note Series, vol. 68, Cambridge University Press, Cambridge, 1983. Translated from the French by R. Barlow, N. I. Shepherd-Barron and M. Reid. MR 732439 Pascale Harinck, Alain Plagne, and Claude Sabbah (eds.), Géométrie tropicale, Éditions de l'École Polytechnique, Palaiseau, 2008 (French). Papers from the Mathematical Days X-UPS held at the École Polytechnique, Palaiseau, May 14–15, 2008. MR 1500296 E. Brugallé and G. Mikhalkin, Floor decompositions of tropical curves in any dimension. In preparation, preliminary version available at the homepage http://www.math.jussieu.fr/$\sim$brugalle/articles/FDn/FDGeneral.pdf. E. Brugallé and G. Mikhalkin, Realizability of superabundant curves. In preparation. Erwan Brugallé and Grigory Mikhalkin, Floor decompositions of tropical curves: the planar case, Proceedings of Gökova Geometry-Topology Conference 2008, Gökova Geometry/Topology Conference (GGT), Gökova, 2009, pp. 64–90. MR 2500574 Erwan Brugallé and Kristin Shaw, Obstructions to approximating tropical curves in surfaces via intersection theory, Canad.
J. Math. 67 (2015), no. 3, 527–572. MR 3339531, DOI https://doi.org/10.4153/CJM-2014-014-4 Y. Eliashberg, A. Givental, and H. Hofer, Introduction to symplectic field theory, Geom. Funct. Anal. Special Volume (2000), 560–673. GAFA 2000 (Tel Aviv, 1999). MR 1826267, DOI https://doi.org/10.1007/978-3-0346-0425-3_4 William Fulton, Introduction to intersection theory in algebraic geometry, CBMS Regional Conference Series in Mathematics, vol. 54, Published for the Conference Board of the Mathematical Sciences, Washington, DC; by the American Mathematical Society, Providence, RI, 1984. MR 735435 Andreas Gathmann, Tropical algebraic geometry, Jahresber. Deutsch. Math.-Verein. 108 (2006), no. 1, 3–32. MR 2219706 Andreas Gathmann and Hannah Markwig, The Caporaso-Harris formula and plane relative Gromov-Witten invariants in tropical geometry, Math. Ann. 338 (2007), no. 4, 845–868. MR 2317753, DOI https://doi.org/10.1007/s00208-007-0092-4 Andreas Gathmann and Hannah Markwig, The numbers of tropical plane curves through points in general position, J. Reine Angew. Math. 602 (2007), 155–177. MR 2300455, DOI https://doi.org/10.1515/CRELLE.2007.006 Mark Gross and Bernd Siebert, Logarithmic Gromov-Witten invariants, J. Amer. Math. Soc. 26 (2013), no. 2, 451–510. MR 3011419, DOI https://doi.org/10.1090/S0894-0347-2012-00757-7 A. Gathmann, K. Schmitz, and A. Winstel, The realizability of curves in a tropical plane. arXiv:1307.5686 Eleny-Nicoleta Ionel, GW invariants relative to normal crossing divisors, Adv. Math. 281 (2015), 40–141. MR 3366837, DOI https://doi.org/10.1016/j.aim.2015.04.027 Eleny-Nicoleta Ionel and Thomas H. Parker, The symplectic sum formula for Gromov-Witten invariants, Ann. of Math. (2) 159 (2004), no. 3, 935–1025. MR 2113018, DOI https://doi.org/10.4007/annals.2004.159.935 Kunihiko Kodaira, Complex manifolds and deformation of complex structures, Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences], vol. 283, Springer-Verlag, New York, 1986. Translated from the Japanese by Kazuo Akao; With an appendix by Daisuke Fujiwara. MR 815922 Eugene Lerman, Symplectic cuts, Math. Res. Lett. 2 (1995), no. 3, 247–258. MR 1338784, DOI https://doi.org/10.4310/MRL.1995.v2.n3.a2 Jun Li, A degeneration formula of GW-invariants, J. Differential Geom. 60 (2002), no. 2, 199–293. MR 1938113 An-Min Li and Yongbin Ruan, Symplectic surgery and Gromov-Witten invariants of Calabi-Yau 3-folds, Invent. Math. 145 (2001), no. 1, 151–218. MR 1839289, DOI https://doi.org/10.1007/s002220100146 Hannah Markwig, Three tropical enumerative problems, Trends in mathematics, Universitätsdrucke Göttingen, Göttingen, 2008, pp. 69–96. MR 2906041 G. Mikhalkin, Phase-tropical curves I. Realizability and enumeration. In preparation. Grigory Mikhalkin, Amoebas of algebraic varieties and tropical geometry, Different faces of geometry, Int. Math. Ser. (N. Y.), vol. 3, Kluwer/Plenum, New York, 2004, pp. 257–300. MR 2102998, DOI https://doi.org/10.1007/0-306-48658-X_6 Grigory Mikhalkin, Decomposition into pairs-of-pants for complex algebraic hypersurfaces, Topology 43 (2004), no. 5, 1035–1065. MR 2079993, DOI https://doi.org/10.1016/j.top.2003.11.006 Grigory Mikhalkin, Enumerative tropical algebraic geometry in $\Bbb R^2$, J. Amer. Math. Soc. 18 (2005), no. 2, 313–377. MR 2137980, DOI https://doi.org/10.1090/S0894-0347-05-00477-7 Grigory Mikhalkin, Tropical geometry and its applications, International Congress of Mathematicians. Vol. II, Eur. Math. Soc., Zürich, 2006, pp. 827–852. 
MR 2275625 Grigory Mikhalkin and Andrei Okounkov, Geometry of planar log-fronts, Mosc. Math. J. 7 (2007), no. 3, 507–531, 575 (English, with English and Russian summaries). MR 2343146, DOI https://doi.org/10.17323/1609-4514-2007-7-3-507-531 B. Parker, Gromov-Witten invariants of exploded manifolds. arXiv:1102.0158 Jürgen Richter-Gebert, Bernd Sturmfels, and Thorsten Theobald, First steps in tropical geometry, Idempotent mathematics and mathematical physics, Contemp. Math., vol. 377, Amer. Math. Soc., Providence, RI, 2005, pp. 289–317. MR 2149011, DOI https://doi.org/10.1090/conm/377/06998 Kristin M. Shaw, A tropical intersection product in matroidal fans, SIAM J. Discrete Math. 27 (2013), no. 1, 459–491. MR 3032930, DOI https://doi.org/10.1137/110850141 Eugenii Shustin, Tropical and algebraic curves with multiple points, Perspectives in analysis, geometry, and topology, Progr. Math., vol. 296, Birkhäuser/Springer, New York, 2012, pp. 431–464. MR 2884046, DOI https://doi.org/10.1007/978-0-8176-8277-4_18 Bernd Sturmfels and Jenia Tevelev, Elimination theory for tropical varieties, Math. Res. Lett. 15 (2008), no. 3, 543–562. MR 2407231, DOI https://doi.org/10.4310/MRL.2008.v15.n3.a14 Ravi Vakil, Counting curves on rational surfaces, Manuscripta Math. 102 (2000), no. 1, 53–84. MR 1771228, DOI https://doi.org/10.1007/s002291020053 Magnus Dehli Vigeland, Smooth tropical surfaces with infinitely many tropical lines, Ark. Mat. 48 (2010), no. 1, 177–206. MR 2594592, DOI https://doi.org/10.1007/s11512-009-0116-2
Erwan Brugallé Affiliation: Université Pierre et Marie Curie, Paris 6, 4 place Jussieu, 75 005 Paris, France – and – CMLS, École polytechnique, CNRS, Université Paris-Saclay, 91128 Palaiseau Cedex, France Email: [email protected] Hannah Markwig Affiliation: Universität des Saarlandes, Fachrichtung Mathematik, Postfach 151150, 66041 Saarbrücken, Germany Address at time of publication: Eberhard Karls Universität Tübingen, Arbeitsbereich Geometrie, Auf der Morgenstelle 10, 72076 Tübingen, Germany Email: [email protected] Received by editor(s): July 11, 2013 Received by editor(s) in revised form: May 28, 2014 Article copyright: © Copyright 2016 University Press, Inc.
greater and smaller infinity?
Consider the following two expressions: $$\sum^{\infty}_{i=1}\frac{1}{i}$$ and $$\lim_{h\to 90^{-}}\tan h^{\circ}.$$ They both equal infinity. I remember my teacher told me there are more real numbers than whole numbers, so $\infty>\infty$ is possible. But how do I know whether two expressions that both equal infinity are equal or not (assume there isn't an obvious bijection between them)? – abc...
No, "$\infty>\infty$" makes no sense. And there are not "more rational numbers than integers". The rational numbers can still be counted. But there are "more real numbers than integers" because the set of reals is uncountable. – Peter Jun 8 '18 at 7:14
@Peter But there are infinitely many real numbers and infinitely many integers. – abc... Jun 8 '18 at 7:21
Yes, but there is no bijection between them. Cantor showed this with his diagonal argument. This is an important result in set theory. – Peter Jun 8 '18 at 7:23
I think you're confusing two different concepts related to infinity - cardinality and limits. Your teacher probably taught you that there are "more" real numbers than rational numbers (that's not the case for whole and rational numbers, by the way) because both infinite sets have different cardinality. Meaning, you can't make a one-to-one correspondence between the reals and the rationals, so one might say that the infinity of the real numbers is "bigger" than the infinity of the rational numbers. Infinite cardinals are about functions that map from one infinite set to another. But the meaning of an infinite limit is different, and isn't really related to the concept of cardinality. When we say that the limit of a sequence is infinity, we mean that for any number you choose, starting at some point, all members of the sequence are greater than that number. To put it simply, it means that the sequence gets bigger and bigger, tending "towards" infinity. The two terms relate to infinity in different ways - the first talks about how infinite sets relate to one another, and the second talks about how sequences and functions behave under certain conditions. That said, there's no such thing as the cardinality of a limit, or the limit of a cardinality, so you can't really compare the two. If you find this topic interesting, I'd recommend you to start reading some basic texts on elementary set theory and real analysis. – GSofer
Superb answer! (+1) – Peter Jun 8 '18 at 9:47
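A worked illustration of the "infinite limit" idea in the answer above (a standard grouping argument, not part of the original post): the partial sums of the harmonic series exceed any bound you choose.

```latex
% Partial sums of the harmonic series grow without bound:
\[
  \sum_{i=1}^{2^{k}} \frac{1}{i}
  \;=\; 1 + \frac{1}{2}
      + \underbrace{\left(\frac{1}{3}+\frac{1}{4}\right)}_{\ge 1/2}
      + \underbrace{\left(\frac{1}{5}+\cdots+\frac{1}{8}\right)}_{\ge 1/2}
      + \cdots
  \;\ge\; 1 + \frac{k}{2}.
\]
% So for any bound M, taking k > 2(M-1) gives a partial sum larger than M.
% This is exactly what "the limit is infinity" means: it says nothing about
% cardinality, only about the growth of the partial sums.
```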
Environmental Processes June 2014, Volume 1, Issue 2, pp 95–103
The Determination of Single and Mixture Toxicity at High Concentrations of Some Acidic Pharmaceuticals via Aliivibrio fischeri
Ayşe Handan Dökmeci, Ismet Dökmeci, Hilmi Ibar
Pharmaceutical and Personal Care Products (PPCPs) may cause serious and significant environmental pollution. Environmental analyses have detected pharmaceuticals in addition to conventional chemical pollutants. In our study, the acute toxicity of ibuprofen, naproxen, diclofenac, salicylic acid and their mixture (INDS) toward the bacterium Aliivibrio fischeri was assessed with the use of ToxAlert®. The selected substances are acidic in nature, highly polar, and widely sold as over-the-counter analgesics. Dose-response curves were drawn, and linear regression and probit analyses determined their 50 %-effective concentrations (EC50). The pharmaceuticals alone are unlikely to have acute impacts in aquatic environments. However, when the compounds were evaluated in combination on A. fischeri, the acute toxicity of the INDS mixture was EC50 7.09 ± 5.1 mg/L, corresponding to 14.10 toxic units (TUs), which indicates that the mixture is very toxic for organisms. Since the target compounds do not exist in isolation, the toxicity of the mixture, rather than only the toxicity of single substances at high concentrations, should be the primary consideration.
Keywords: Ibuprofen, Naproxen, Diclofenac, Salicylic acid, Mixture toxicity, Aliivibrio fischeri
What happens to drugs after being used? Are they completely used up by our bodies, or are they partially eliminated via excretion? Where do unused and stored drugs end up? Do drugs affect other organisms in the environment? Pharmaceutical and Personal Care Products (PPCPs) are ubiquitous and raise the serious and significant problem of continuous and repetitive environmental pollution (Daughton and Ternes 1999). As a result of the marketing of new medicines and personal care products and their detection in the aquatic environment, the effects of PPCPs in combination with other chemicals have become a very pertinent topic. Although the concentrations of the medicines in the environment are very low (ng/L or ppt), as they remain part of a mixture they may reduce or cancel the effect of each other (antagonistic effect) or they may increase the effect of one another (synergistic effect). The synergistic effects of medicines may create a potential risk to non-targeted organisms. Recent studies have shown that very little is known about the long-term impacts of drugs on aquatic organisms when biological targets are considered. The same is also true for the impacts of compounds that are able to exist as mixtures. An important reason for the interest in medicinal substances in the environment is that they are designed to produce biological effects. Drugs are lipophilic in order to penetrate cell membranes, and they can be hydrolyzed in the acidic pH of the stomach. They are persistent, and have high mobility in the liquid phase. Due to these characteristics, active drug substances or their metabolites can bioaccumulate and cause effects in both aquatic and terrestrial ecosystems (Ternes et al. 1998; Fatta et al. 2007). Although drugs are designed to target specific metabolic or molecular pathways in humans and animals, the majority of them have significant side effects. They may have toxic effects in various forms such as narcosis, polar narcosis, apnea, acetylcholinesterase inhibition, membrane irritation, or paralysis of the central nervous system (McCarty and MacKay 1993).
Drug residues do not occur as single pollutants in aquatic environments; they generally exist as mixtures. Therefore, we need to scientifically assess the risks of these complex mixtures to aquatic organisms, as well as the organisms' level of exposure to them. Ecotoxicological studies have been conducted for more than two decades to determine the combined effects and risks of various substances (Altenburger et al. 2000; Backhaus et al. 2000; Faust et al. 2001). Generally, there are two different concepts for estimating the toxicity of a mixture: (1) total concentration; and (2) independent impact. The concentrations of the individual compounds in a mixture will potentially produce a higher combined effect if they comply with the concept of "the total concentration of compounds". Another significant point of the total concentration concept is that a substance contributes to the total impact of the mixture even at concentrations below its individually unobserved (neutral) effect concentration. The mode of action of this type of compound is known as narcosis. The potential of a chemical to cause narcosis depends on its hydrophobicity, which is widely expressed as the n-octanol/water partition coefficient (logKow) (Cleuvers 2003). Compounds with logKow > 3 have a relatively high capacity for bioaccumulation in an organism's tissues (Sanderson et al. 2003). Ecotoxicity tests are increasingly widespread and have started to gain an importance similar to that of chemical analyses in the control of water pollution. Such tests generally use prokaryotic and eukaryotic organisms; of these, plants, algae, bacteria and shelled organisms are widely used in toxicity tests (Radix et al. 2000; Castillo et al. 2001; Cleuvers 2004; Schnell et al. 2009; Dietrich et al. 2010). In our study, the marine bacterium A. fischeri was used as the test organism in a bioluminescence inhibition test. More recently, biotests with bioluminescent bacteria have been the focus of increased interest because, although toxicity mechanisms differ greatly, any substance that has a toxic effect on one organism is noted to have a similar effect on others. Thus, the determination of luminescence inhibition is representative of the impact of a substance on more complex organisms (Ren and Frymier 2002; Parvez et al. 2006). The standard bioluminescence inhibition test is performed in accordance with either the ISO 11348-3 or the DIN 38412 L34 standard. The inhibition of the A. fischeri bacterium after exposure to various diluted solutions is measured, and EC50 values are calculated using probit and linear regression analysis. The purpose of this study is to assess the single-substance and combined toxicities of naproxen, diclofenac, ibuprofen and salicylic acid (the metabolite of acetylsalicylic acid) at high concentrations via A. fischeri.
2 Materials and Methods
2.1 Toxicity Testing of Pharmaceuticals
Ibuprofen, diclofenac, naproxen, salicylic acid, NaOH, HCl and NaCl were all purchased from Sigma–Aldrich (Merck, Darmstadt, Germany; purity >98 %). We studied the single-substance and mixture toxicity of naproxen, ibuprofen, diclofenac and salicylic acid using a long-term bioluminescence inhibition test with the marine bacterium A. fischeri acting as the test organism, according to the protocol of the International Standard Organization (ISO 11348-3 1998). Samples of 100 mg/L were prepared from naproxen, ibuprofen, diclofenac, salicylic acid and the INDS mixture.
Due to the low solubility of the drugs in water, stock solutions were prepared in 1 % ethanol/water, a solvent concentration that has no effect on the luminescent bacteria. It is recommended to carry out toxicity tests at pH 6–8 with a sodium chloride solution of 20 % or higher salinity (Onorati and Mecozzi 2003). Therefore, using 1 N NaOH and 0.1 N HCl, the pH value of each stock solution was adjusted to the range 6–8, the optimum interval for the organism. In order to provide the relevant osmotic pressure for the test organisms, the salinity of the stock solution was adjusted to 2 % with NaCl. Measurements were performed using Merck ToxAlert® 100 kits. The prepared stock solutions were diluted to 80 %, 50 %, 25 %, 12.5 %, 6.75 %, 3.2 % and 0.8 %. The reactivation solution, at room temperature, was mixed well and left for a minimum of 15 min after 12.5 mL of reactivation solution had been added to the vial. The experiments used A. fischeri bacteria that were liquid-dried and frozen at −20 °C. Samples of 0.5 mL were transferred from the reactivation solution in the equipment's reservoir to the liquid-dried bacteria; they were then gently agitated and allowed to stand for 15 min. The bacteria and the diluted solutions were then ready for the tests. All tests were performed in duplicate. Data evaluation was performed according to ISO 11348-3 (Eq. 1), based on comparison with nontoxic, toxicant-free control samples after a 30-min exposure, and the concentration-effect curves were analyzed as described below. Bioluminescence inhibition:
$$ \mathrm{Inhibition}\ \left(\%\right)=\left[1-\left(\mathrm{light}\ {\mathrm{intensity}}_{\mathrm{sample}}/\mathrm{light}\ {\mathrm{intensity}}_{\mathrm{control}}\right)\right]\times 100 $$
The effective concentration (EC50) was defined as the sample concentration that resulted in a 50 % reduction in luminescence. The EC50 values of the samples were determined from the curve of the inhibition percentage of the samples against the dilution factor (ASTMD 1996). Following assessment of the EC50 values, the Toxic Unit (TU) dosage was determined. The Toxic Unit for a standard material was calculated with reference to Sprague and Ramsay's (1965) formula: $\mathrm{TUs}={\left({\mathrm{EC}}_{50}\right)}^{-1}\times 100$. Regarding this scale: if TU = 0, the sample is non-toxic (nt); if TU <1, it is slightly toxic (st); if TU is 1–10, it is classified as toxic (t); if TU is 11–100, the sample is very toxic (vt); and if TU >100, then the sample is extremely toxic (et). A toxicity-unit (TU) scale was used to assess the toxicity of the tested samples. A dose-response chart was drawn for ibuprofen, naproxen, diclofenac, salicylic acid and INDS, as shown in Fig. 1, and EC50 values were found.
Fig. 1 Bioluminescence inhibition (%) for the tested compounds (ToxAlert® 100)
Regression equations and probit analyses were used to determine the Toxic Units (TUs) for the EC50 values. According to the regression results, the EC50 values were found to be 39.93 ± 2.2, 47.07 ± 1.45, 16.31 ± 0.72, 52.64 ± 2.35 and 5.13 ± 8.1 mg/L for ibuprofen, naproxen, diclofenac, salicylic acid and INDS, respectively, compared with TU values of 2.50, 2.03, 6.10, 1.89 and 19.49, respectively. Probit analysis indicated that EC50 was 39.00 ± 0.8, 53.35 ± 7.9, 11.79 ± 1.75, 54.39 ± 8.9 and 7.09 ± 5.1 mg/L, respectively, while the TU values were 2.56, 1.87, 8.48, 1.83 and 14.10, respectively (Table 1).
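A minimal numerical sketch of the evaluation described above, using made-up luminescence readings rather than the measured data; the log-linear fit stands in for the regression/probit step, so the resulting EC50 is purely illustrative.

```python
import numpy as np

# Hypothetical 30-min luminescence readings (arbitrary units) for one compound;
# the dilution series mirrors the one described above, the readings do not.
concentration = np.array([0.8, 3.2, 6.75, 12.5, 25.0, 50.0, 80.0])  # % of stock
light_control = 1000.0
light_sample = np.array([980.0, 900.0, 820.0, 650.0, 470.0, 300.0, 180.0])

# Eq. 1: Inhibition (%) = [1 - (I_sample / I_control)] * 100
inhibition = (1.0 - light_sample / light_control) * 100.0

# Fit inhibition against log10(concentration) and solve for 50 % inhibition
# (a simple stand-in for the probit / linear regression analyses).
slope, intercept = np.polyfit(np.log10(concentration), inhibition, 1)
ec50 = 10 ** ((50.0 - intercept) / slope)
print(f"EC50 ≈ {ec50:.1f} (in the units of the dilution series)")
```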
Table 1 EC50 and TU rates of drug components after regression and probit analyses

Drug component    EC50 (mg/L), regression analysis    EC50 (mg/L), probit analysis    Log Kow
Ibuprofen         39.93 ± 2.2                         39.00 ± 0.8                     -
Naproxen          47.07 ± 1.45                        53.35 ± 7.9                     -
Diclofenac        16.31 ± 0.72                        11.79 ± 1.75                    -
Salicylic acid    52.64 ± 2.35                        54.39 ± 8.9                     -
INDS              5.13 ± 8.1                          7.09 ± 5.1                      -

There was no significant difference between the TU values obtained from the two analyses. The toxicity assessment showed that the individual components were toxic, whereas the mixture was categorized as very toxic. In a bioluminescence inhibition test using A. fischeri for ibuprofen, Farré et al. (2001) reported EC50 values of 12.1 and 19.1 mg/L, while Ra et al. (2008) found it to be 37.5 mg/L. In our study, probit analysis determined EC50 to be 39 ± 0.8 mg/L, which corresponds to a toxic TU rating for the organism. In a bioluminescence inhibition test using A. fischeri for diclofenac, Ferrari et al. (2003) reported that EC50 was 11.45 mg/L and that A. fischeri was more sensitive to the diclofenac compound than other organisms. Farré et al. (2001) reported EC50 values for diclofenac of 13.3 and 13.7 mg/L, whereas Ra et al. (2008) found EC50 to be 9.7 mg/L. In our study, probit analysis determined EC50 to be 11.79 ± 1.75 mg/L, which is classified as toxic for organisms. Compared with the other target components, diclofenac is the most toxic. In a bioluminescence inhibition test using A. fischeri for salicylic acid, Farré et al. (2001) found EC50 to be 41.3 mg/L, whereas Henschel et al. (1997) reported it to be 90 mg/L. In another study, the lowest EC50 value of salicylic acid was reported as 37 mg/L, while the highest concentration in the aqueous environment was 60 μg/L (Karl et al. 2005). In our study, probit analysis showed that EC50 was 54.39 ± 8.9 mg/L, which is classified as toxic for organisms. In a bioluminescence inhibition test using A. fischeri for naproxen, Farré et al. (2001) reported EC50 values of 21.2 and 35.6 mg/L. In our study, probit analysis determined EC50 to be 53.35 ± 7.9 mg/L, which is classified as toxic for organisms. Many studies have been performed on the acute toxic effects of medicines on non-targeted organisms, and the effects of medicines in isolation and in mixtures have been compared. According to these results, the toxic effects of medicine mixtures are more complex and unpredictable than the toxic effects of single medicines (Dietrich et al. 2010; Flaherty and Dodson 2005; Cleuvers 2003, 2004). Dietrich et al. (2010) stated that Daphnia magna exposed to carbamazepine, diclofenac, 17 α-ethinylestradiol and metoprolol did not experience a stronger effect when these medicines were applied as a mixture rather than individually. In contrast to this study, Cleuvers (2003, 2004) examined the ecotoxic potentials of anti-inflammatory drugs, in addition to their various activities, in different biotest groups with various aquatic organisms. The NSAID mixture (diclofenac, ibuprofen, naproxen, acetylsalicylic acid) was analyzed in acute Daphnia and algae tests. Although each single compound either had no effect or only a slight effect, the mixture was determined to be toxic. Schnell et al. (2009) showed that, due to the synergistic effects of the compounds, the toxic effect of various therapeutic drug mixtures on the liver cells of rainbow trout is more significant than previously estimated.
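As a quick cross-check of the classification above, the toxic units implied by the probit EC50 values in Table 1 can be recomputed and mapped onto the Sprague and Ramsay-style scale given in the Methods; the class boundaries in the helper below are a continuous reading of that scale.

```python
# TU = 100 / EC50, applied to the probit EC50 values (mg/L) from Table 1.
ec50_probit = {
    "ibuprofen": 39.00,
    "naproxen": 53.35,
    "diclofenac": 11.79,
    "salicylic acid": 54.39,
    "INDS mixture": 7.09,
}

def toxicity_class(tu):
    # Continuous reading of the scale: <1 slightly toxic, 1-10 toxic,
    # 10-100 very toxic, >100 extremely toxic (TU = 0 would be non-toxic).
    if tu == 0:
        return "non-toxic"
    if tu < 1:
        return "slightly toxic"
    if tu <= 10:
        return "toxic"
    if tu <= 100:
        return "very toxic"
    return "extremely toxic"

for name, ec50 in ec50_probit.items():
    tu = 100.0 / ec50
    print(f"{name:15s} TU = {tu:5.2f} -> {toxicity_class(tu)}")
```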
Lang and Kohidai (2012) stated that these medicines (EC50 26.56 ± 2.56 mg/L for DIC, EC50 46.79 ± 1.78 mg/L for IBU) do not have a potential acute toxic effect on Tetrahymena pyriformis even when they are individually present at high concentrations. They reported that the IBU and DIC components showed additive effects at the lowest concentration (0.25 TU diclofenac + 0.25 TU ibuprofen) when applied together, while they showed antagonistic effects at higher concentrations. The reasons for the inconsistent results between studies are that they used different species (Zhang et al. 2010) and different medicines interacting in the same or in opposite directions (Dietrich et al. 2010; Cleuvers 2004). Therefore, the existing studies on mixture toxicity should be expanded upon. Annual consumption of ibuprofen is estimated to be in the hundreds of tonnes (Koutsouba et al. 2003). Many studies have reported that ibuprofen is representative of anti-rheumatic medications (Wiegel et al. 2004). Studies have shown that 14 % of consumed ibuprofen does not undergo change (Daughton and Ternes 1999). Almost every water sample taken within Europe is reported to have detectable levels of this active substance, because it is very frequently prescribed and its use is pervasive (Andreozzi et al. 2003; Rodriguez et al. 2003). Less than 1 % of the diclofenac compound is excreted from the body unchanged. Its half-life t1/2 is 4 h. On entering the aqueous environment, it is decomposed rapidly by photodegradation in less than 24 h (Buser et al. 1998; Ayscough et al. 2000). Despite these characteristics, diclofenac has been identified as a very significant active drug component, both in surface waters and in wastewater samples, during long-term monitoring studies (Heberer 2002). Acetylsalicylic acid is used as an analgesic and anti-inflammatory, and is one of the most widely used pharmaceuticals. By 2000, sales had reached 1000 tonnes per annum in Europe (Heberer 2002; Dietrich et al. 2002). Its rate of use in the UK is 18 tonnes per annum (Jones et al. 2002). In a study conducted in Turkey, salicylic acid was detected at 18.74 ± 3.3 ng/L in surface waters (Dökmeci et al. 2013). Naproxen is an over-the-counter anti-inflammatory drug that is commonly used in medical and veterinary sciences (Metcalfe et al. 2003). It is known that 60 % of the dosage is excreted from the body without being metabolized, and it is relatively persistent in the environment (Kosjek et al. 2005). A study in Turkey detected 15.58 ± 1.5 ng/L of naproxen in surface waters (Dökmeci et al. 2013). In Turkey, by 2008, the prescription (Rx) drug market was valued at 12 billion TRY (9.3 billion US dollars) annually, following 9 % growth, and box sales had reached 1.38 billion after an expansion of 5 %. Antibiotics rank first in box sales. As presented in Fig. 2, non-steroidal anti-inflammatory drugs (NSAIDs) have the same market share as cold and flu medicines.
Fig. 2 Drug consumption by treatment medication category in Turkey (2007–2008) (http://www.ieis.org.tr)
The components selected in our study are medicine groups that are frequently consumed and that are sold without a prescription. The studies show that these medicine components are individually present at low concentrations in the receiving environment. However, as other components are also present in the receiving environment, the threat posed by mixture toxicity should not be ignored.
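To make the "total concentration" (toxic-unit addition) and "independent impact" ideas from the Introduction concrete, the sketch below applies both to a hypothetical four-compound mixture; the concentrations and the simple saturation-type effect model used for independent action are assumptions for illustration, not values or models from this study.

```python
# Hypothetical single-substance EC50s (mg/L) and mixture concentrations (mg/L).
ec50 = {"ibuprofen": 39.0, "diclofenac": 11.8, "naproxen": 53.4, "salicylic acid": 54.4}
conc = {name: 5.0 for name in ec50}  # assume 5 mg/L of each compound

# (1) "Total concentration" (concentration addition): toxic units simply add.
tu_sum = sum(conc[s] / ec50[s] for s in ec50)
print(f"sum of toxic units = {tu_sum:.2f}  (a sum >= 1 predicts at least a 50% effect)")

# (2) "Independent impact" (independent action): combine individual effect
# probabilities. The per-compound effect p_i = c / (c + EC50) is a crude
# placeholder model, not taken from the paper.
p_none = 1.0
for s in ec50:
    p_i = conc[s] / (conc[s] + ec50[s])
    p_none *= (1.0 - p_i)
print(f"independent-action combined effect = {1.0 - p_none:.1%}")
```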
All the NSAIDs utilized in our study have diverse biological effects on various cellular sources, such as vasodilatation, bronchoconstriction, inhibition of natural killer cell activation, suppressor T cell modulation, platelet activity inhibition, platelet aggregation and vasoconstriction (Dökmeci 2007). As a result of the disposal of these drugs, which are intended to produce specific effects in medicine or veterinary science, there will be undesirable side effects for organisms in the receiving environment. The individual acute toxicity ranking of ibuprofen, naproxen, diclofenac and salicylic acid, according to the EC50 values, was found to be DIC > IBU > NAP > SA. When diclofenac was compared with the other components in this ranking, its EC50 value (11.79 ± 1.75 mg/L) was found to be very close to the EC50 value that we determined for the INDS mixture. The EC50 value of the mixture of ibuprofen, naproxen, diclofenac and salicylic acid together was 7.09 ± 5.1 mg/L, which is classified as very toxic. These results are consistent with the findings of Cleuvers (2004), who reported in a QSAR approach-based study that NSAIDs acted on Daphnia and algae by nonpolar narcosis and that the higher the logKow of the substance, the higher was its toxicity. However, Rudzok et al. (2011) found only a weak relationship between toxicity and the logKow values of the organic pollutants. This difference between the studies is considered to originate from the structure of the chemical substances.
5 Conclusions
Drug components are not toxic at their existing concentrations in the environment; however, they might become toxic as mixtures, due to what is termed the synergistic impact or total impact. It is not possible to determine the acute toxic effects of medicine mixtures in receiving environments by taking their individual effects as a basis. Our knowledge about drug residues in aquatic environments indicates that, on their own, they will not present a risk of acute toxicity. Since these components do not exist in isolation but coexist with diverse organic and inorganic chemicals, it is necessary to monitor these drug components in wastewater discharge effluents and to optimize wastewater treatment plants. Furthermore, there are very few studies about the chronic toxicities of these chemicals for the organisms in the receiving environment. The development of such studies is very important for ecotoxicology as a whole. This study was supported by Trakya University Scientific Research Project TÜBAP-120. The authors are grateful to the TUBITAK Marmara Research Center for conducting the toxicity measurements. An initial version of this paper was presented at the 8th International Conference of the EWRA in Porto, Portugal, June 26–29, 2013. Altenburger R, Backhaus T, Boedeker W, Faust M, Scholze M, Grimme LH (2000) Predictability of the toxicity of multiple chemical mixtures to A. fischeri: mixtures composed of similarly acting chemicals. Environ Toxicol Chem 19:2341–2347CrossRefGoogle Scholar Andreozzi R, Raffaele M, Nicklas P (2003) Pharmaceuticals in STP effluents and their solar photodegradation in aquatic environment. Chemosphere 50(10):1319–1330CrossRefGoogle Scholar ASTMD 5660 (1996) Standard Test Method for Assessing the Microbial Detoxification of Chemically Contaminated Water and Soil Using a Toxicity Test with a Luminescent Marine Bacterium.
American Society for Testing and Materials International 1–9Google Scholar Ayscough NJ, Fawell J, Franklin G, Young W (2000) Review of human pharmaceuticals in the environment R&D technical report. Environ Agency, BristolGoogle Scholar Backhaus T, Altenburger R, Boedeker W, Faust M, Scholze M, Grimme LH (2000) Predictability of the toxicity of a multiple mixture of dissimilarly acting chemicals to Vibrio fischeri. Environ Toxicol Chem 19:2348–2356CrossRefGoogle Scholar Buser HR, Poiger T, Müller MD (1998) Occurrence and fate of the pharmaceutical drug diclofenac in surface waters: rapid photodegradation in a lake. Environ Sci Technol 32(22):3449–3456CrossRefGoogle Scholar Castillo M, Alonso MC, Riu J, Reinke M, Klöter G, Dizer H, Fischer B, Hansen PD, Barcelo D (2001) Identification of cytotoxic compounds in European wastewaters during a field experiment. Anal Chim Acta 426:265–277CrossRefGoogle Scholar Cleuvers M (2003) Aquatic ecotoxicity of pharmaceuticals including the assessment of combination effects. Toxicol Lett 142:185–194CrossRefGoogle Scholar Cleuvers M (2004) Mixture toxicity of the anti-inflammatory drugs diclofenac, ibuprofen, naproxen, and acetylsalicylic acid. Ecotoxicol Environ Saf 59:309–315CrossRefGoogle Scholar Daughton CG, Ternes TA (1999) Pharmaceuticals and personal care products in the environment: agents of subtle change? Environ Health Perspect 107:907–938CrossRefGoogle Scholar Dietrich DR, Webb SF, Petry T (2002) Hot spot pollutants: pharmaceuticals in the environment. Toxicol Lett 131:1–3CrossRefGoogle Scholar Dietrich S, Ploessl F, Bracher F, Laforsch C (2010) Single and combined toxicity of pharmaceuticals at environmentally relevant concentrations in Daphnia magna–A multigenerational study. Chemosphere 79:60–66CrossRefGoogle Scholar Dökmeci I (2007) Farmakoloji, İlaçlar ve etkileri. Alfa yayınevi, IstanbulGoogle Scholar Dökmeci AH, Sezer K, Dökmeci İ, Ibar H (2013) Determination of selected acidic pharmaceuticals and caffeine in Ergene basin, in Turkey. Glob NEST J 15:431–439Google Scholar Farré M, Ferrer I, Ginebreda A, Figueras M, Olivella L, Tirapu L, Vilanova M, Barcelo D (2001) Determination of drugs in surface water and wastewater samples by liquid chromatography-mass spectrometry: methods and preliminary results including toxicity studies with Vibrio fischeri. J Chromatogr A 938:187–197CrossRefGoogle Scholar Fatta D, Nikolaou A, Achilleos A, Meric S (2007) Analaytical methods for tracing pharmaceutical residues in water and wastewater. Trends Anal Chem 26:515–533CrossRefGoogle Scholar Faust M, Altenburger R, Backhaus T, Blanck H, Boedeker W, Gramatica P, Hamer V, Scholze M, Vighi M, Grimme LH (2001) Predicting the joint algal toxicity of multicomponent s -triazine mixtures at low−effect concentrations of individual toxicants. Aquat Toxicol 56:13–32CrossRefGoogle Scholar Ferrari B, Paxeus N, Lo Giudice R, Pollio A, Garric J (2003) Ecotoxicological impact of pharmaceuticals found in treated wastewaters: study of carbamazepine clofibric acid and diclofenac. Ecotoxicol Environ Saf 55:359–370CrossRefGoogle Scholar Flaherty CM, Dodson SI (2005) Effects of pharmaceuticals on Daphnia survival, growth, and reproduction. Chemosphere 61:200–207CrossRefGoogle Scholar Heberer T (2002) Occurrence fate and removal of pharmaceutical residues in the aquatic environment: a review of recent research data. Toxicol Lett 131:5–17CrossRefGoogle Scholar Henschel KP, Wenzel A, Diedrich M, Fliedner A (1997) Environmental hazard assessment of pharmaceuticals. 
Regul Toxicol Pharmacol 25:220–225CrossRefGoogle Scholar International Standard Organisation (1998) Water-quality determination of the inhibitory effect of water samples on the light emission of Vibrio fischeri (luminescent bacteria test) EN ISO 11348-3 GenevaGoogle Scholar Jones OA, Voulvoulis N, Lester JN (2002) Aquatic environmental assessment of the top 25 English prescription pharmaceuticals. Water Res 36:5013–5022CrossRefGoogle Scholar Karl F, Anna WA, Daniel C (2005) Ecotoxicology of human pharmaceuticals: a review. Aquat Toxicol 76:122–159Google Scholar Kosjek T, Heath E, Krbavcic A (2005) Determination of non-steroidal anti-inflammatory drug (NSAIDs) residues in water samples. Environ Int 31:679–685CrossRefGoogle Scholar Koutsouba V, Heberer T, Fuhrmann B, Schmidt-Baumler K, Tsipi D, Hiskia A (2003) Determination of polar pharmaceuticals in sewage water of Greece by gas chromatography–mass spectrometry. Chemosphere 51:69–75CrossRefGoogle Scholar Lang J, Kohidai L (2012) Effects of the aquatic contaminant human pharmaceuticals and their mixtures on the proliferation and migratory responses of the bioindicator freshwater ciliate Tetrahymena. Chemosphere 89:592–601CrossRefGoogle Scholar McCarty LS, MacKay D (1993) Enhancing ecotoxicological modelling and assessment. Environ Sci Technol 27:1719–1728CrossRefGoogle Scholar Metcalfe CD, Miao XS, Koenig BG, Struger J (2003) Distribution of acidic and neutral drugs in surface waters near sewage treatment plants in the lower Great Lakes Canada. Environ Toxicol Chem 22:2881–2889CrossRefGoogle Scholar Onorati F, Mecozzi M (2003) Effects of two diluents in the Microtox toxicity bioassay with marine sediments. Chemosphere 54:679–687CrossRefGoogle Scholar Parvez S, Venkataraman C, Mukherji S (2006) A review on advantages of implementing luminesence inhibition test (Vibrio fischeri) for acute toxicity prediction of chemicals. Environ Int 32:256–268CrossRefGoogle Scholar Ra JS, Oh SY, Lee BC, Kim SD (2008) The effect of suspended particles coated by humic acid on the toxicity of pharmaceuticals estrogens and phenolic compounds. Environ Int 34:184–192CrossRefGoogle Scholar Radix P, Leonard M, Papantoniou C, Roman G, Saouter E, Gallotti-Schmitt S, Thiebaund H, Vasseur P (2000) Comparison of four chronic toxicity tests using algae, bacteria and intertebrates assessed with sixteen chemicals. Ecotoxicol Environ Saf 47:186–194CrossRefGoogle Scholar Ren S, Frymier PD (2002) Estimating the toxicities of organic chemicals to bioluminesencent bacteria and activated sludge. Water Res 36:4406–4414CrossRefGoogle Scholar Rodriguez I, Quintana JB, Carpinteiro J, Carro AM, Lorenzo RA, Cela R (2003) Determination of acidic drugs in sewage water by GC-MS as tert-butyldimethylsilyl derivatives. J Chromatogr A 985:265–274CrossRefGoogle Scholar Rudzok S, Krejci S, Graebsch C, Herbarth O, Mueller A, Bauer M (2011) Toxicity profiles of four metals and 17 xenobiotics in the human hepatoma cell line HepG2 and the protozoa Tetrahymena pyriformis—a comparison. Environ Toxicol 26:171–186CrossRefGoogle Scholar Sanderson H, Johnson DJ, Wilson CJ, Brain RA, Solomon KR (2003) Probabilistic hazard assessment of environmentally occurring pharmaceuticals toxicity to fish, daphnids and algae by ECOSAR screening. Toxicol Lett 144:383–395CrossRefGoogle Scholar Schnell S, Bols NC, Barata C, Porte C (2009) Single and combined toxicity of pharmaceuticals and personal care products (PPCPs) on the rainbow trout liver cell line RTL-W1. 
Aquat Toxicol 93:244–252CrossRefGoogle Scholar Sprague JB, Ramsay BA (1965) Lethal levels of mixed copper-zinc solutions for juvenile salmon. J Fish Res Board Can 22:425–432CrossRefGoogle Scholar Ternes T, Hirsch R, Mueller J, Haberer K (1998) Methods for the determination of neutral drugs as well as beta blockers and alpha2-sympathomimetics in aqueous matrices using GC/MS and LC/MS/MS. Fresenius J Anal Chem 362:329–340CrossRefGoogle Scholar Wiegel S, Aulinger A, Brockmeyer R, Harms H, Loffler J, Reincke H, Schmidt R, Stachel B, Von TW, Wanke A (2004) Pharmaceuticals in the river Elbe and its tributaries. Chemosphere 57:107–126CrossRefGoogle Scholar Zhang XJ, Qin HW, Su LM, Qin WC, Zou MY, Sheng LX, Zhao YH, Abraham MH (2010) Interspecies correlations of toxicity to eight aquatic organisms: theoretical considerations. Sci Total Environ 408:4549–4555CrossRefGoogle Scholar © Springer International Publishing Switzerland 2014 1.Emergency and Disaster Management DepartmentNamik Kemal University, School of HealthÇorluTurkey 2.Department of PharmacologyTrakya University, Faculty of MedicineEdirneTurkey 3.Department of ChemistryTrakya University, Faculty of ScienceEdirneTurkey Dökmeci, A.H., Dökmeci, I. & Ibar, H. Environ. Process. (2014) 1: 95. https://doi.org/10.1007/s40710-014-0009-7 Published on behalf of the European Water Resources Association